
What began as a student project in Ramtin Zand's 2022 course on neuromorphic computing, an area of computer science that designs artificial intelligence systems inspired by the human brain, has rapidly grown into an award-winning University of South Carolina research initiative with real-world applications in education and AI.
Zand, an assistant professor in the Molinaroli College of Engineering and Computing’s Department of Computer Science and Engineering and principal investigator of the Intelligent Circuits, Architecture and Systems (iCAS) Lab, calls the journey of this facial expression recognition project a prime example of “classroom to research—and beyond.”
The work originated when a student expressed interest in applying course concepts to something practical. Teaming up with a classmate, the student developed a system that could identify human facial expressions using machine learning. Zand immediately saw broader potential and folded the work into a larger National Science Foundation proposal aimed at real-time sign language translation. Facial expressions, after all, carry linguistic meaning in American Sign Language.
The team eventually published two papers in 2023 and earned additional accolades, including a top research award at USC’s Computer Science and Engineering Symposium.
The technology’s most exciting development came through a partnership with Columbia-based Van Robotics. The AI model—trained to recognize emotional cues like frustration or boredom—was integrated with “ABii,” the company’s social robot designed to teach K-5 students math and reading. In live demos, ABii can now respond empathetically: encouraging a student to take a break when it senses frustration, or offering praise when it detects satisfaction.
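The article does not describe the implementation, but the behavior it demonstrates, detecting an emotional cue and choosing an empathetic reply, follows a simple classify-then-respond pattern. The sketch below is purely illustrative: the label set, the canned responses, and the helper names are assumptions, not the actual iCAS Lab or Van Robotics code.

```python
# Purely illustrative sketch of the classify-then-respond loop described
# above. Labels, responses, and helper names are assumptions, not the
# actual iCAS Lab / Van Robotics implementation.
import random

EMOTIONS = ["neutral", "frustration", "boredom", "satisfaction"]

# How a tutoring robot might react to each detected cue.
RESPONSES = {
    "frustration": "Let's take a short break and come back to this.",
    "boredom": "Want to try a tougher problem?",
    "satisfaction": "Great work! Let's keep going.",
    "neutral": None,  # no intervention needed
}

def respond_to_student(frame, classify):
    """Classify one camera frame and pick an empathetic response."""
    emotion = classify(frame)  # returns one of EMOTIONS
    return emotion, RESPONSES[emotion]

# Stand-in classifier so the sketch runs end to end; a real system
# would call a trained facial-expression model here.
def dummy_classifier(frame):
    return random.choice(EMOTIONS)

emotion, reply = respond_to_student(frame=None, classify=dummy_classifier)
print(emotion, "->", reply)
```

The "take a break" branch corresponds to the behavior described in the live demos: the mapping from detected emotion to tutoring response is where the empathy lives, independent of which model produces the classification.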
“This wasn’t just about the technology,” Zand says. “It became a full pipeline of training. We took students who had never heard of machine learning and now they’re publishing, winning awards, and even going to grad school.”
The project also underscores iCAS Lab’s mission: bringing large-scale AI models to small, privacy-respecting devices. Unlike many systems that send user data to the cloud, Zand’s models run entirely on-device—meaning sensitive facial data never leaves the robot.
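As a rough illustration of what on-device inference looks like in practice, here is a minimal sketch assuming a quantized TensorFlow Lite model; the article does not say which runtime the lab actually uses, and the model file name is hypothetical. The key property is that the camera frame is processed entirely in local memory.

```python
# Minimal sketch of on-device inference, assuming a quantized TensorFlow
# Lite model. The runtime choice and the file name "expression_model.tflite"
# are assumptions; the article does not specify the lab's actual stack.
import numpy as np
from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter

interpreter = Interpreter(model_path="expression_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify_on_device(frame):
    """Everything stays local: the frame goes into the interpreter and
    only a prediction comes out. No pixels cross the network."""
    x = np.expand_dims(frame, 0).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]
```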
Thanks to the early success of this project and others like it, Zand's lab secured nearly $600,000 in NSF funding in 2024. The next frontier? Integrating large language models, such as ChatGPT-style conversational AI, with the robot to provide personalized tutoring that adapts in real time to a student's emotional engagement.
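Since that integration is still planned work, any code is necessarily speculative; the sketch below only illustrates the general idea of conditioning an LLM tutor on the detected emotional state. The prompt wording and the generic `llm` backend are assumptions, not a description of the lab's design.

```python
# Speculative sketch only: the LLM integration is future work, so the
# prompt wording and the generic `llm` backend are assumptions.

def tutor_messages(question, student_answer, emotion):
    """Fold the detected emotional state into the system prompt so the
    model can adapt its pacing and tone."""
    return [
        {"role": "system",
         "content": (f"You are a patient K-5 tutor. The student currently "
                     f"appears {emotion}. Adjust your pacing and "
                     f"encouragement accordingly.")},
        {"role": "user",
         "content": f"Question: {question}\nStudent's answer: {student_answer}"},
    ]

messages = tutor_messages("What is 7 x 8?", "54", emotion="frustration")
# reply = llm(messages)  # any chat-completion backend would plug in here
```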
“We’re not just teaching AI to understand humans,” Zand says. “We’re teaching it to support them, especially when they’re learning.”
The project has already inspired new student research, additional collaborations and a roadmap for other applications—from smart manufacturing to underwater robotics.
“Sometimes it takes a long time to go from a theoretical idea in a classroom to results, and as professors, we try to keep telling students that it’s possible,” Zand says. “Now we have a great story about how fast you can make a difference.”