Frontier research on human-centered AI and efficient computing systems spanning the following domains:
- Human-centered computing: Interaction design theory, concepts, and paradigms for end-user programming and praxisware. Praxisware refers to complex, feature-rich software that is used intensively over long periods, often in professional settings, and demands significant learning investment to master.
- Efficient ML: Building models and tooling for fast on-device inference and training (e.g., parameter-efficient fine-tuning with LoRA adapters), with a particular focus on action models with multimodal understanding
- Neurosymbolic AI: Combining neural networks with symbolic reasoning engines to enable more reliable, interpretable, and trustworthy AI systems
- Virtual OS: Shifting OS responsibilities, such as resource management, into the compiler to achieve hardware independence
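The LoRA adapters mentioned under Efficient ML keep a pretrained weight matrix frozen and learn only a low-rank update, which is what makes on-device fine-tuning cheap. A minimal NumPy sketch of the idea (all sizes and values here are illustrative, not from any specific model):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 16, 16, 2  # hypothetical dimensions; rank << d_in, d_out

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))               # zero-initialized, so the update starts at 0

def forward(x):
    # Effective weight is W + B @ A, but only A and B are trained.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
# With B zero, the adapted model reproduces the frozen model exactly.
assert np.allclose(forward(x), W @ x)
# The adapter trains rank*(d_in+d_out) parameters instead of d_in*d_out.
assert A.size + B.size < W.size
```

Because only `A` and `B` receive gradients, the optimizer state and weight deltas shrink by orders of magnitude, which is the property that makes on-device training tractable.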
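One common neurosymbolic pattern is to let a network produce soft perceptions and a symbolic rule filter or rescore them for consistency. A toy sketch under assumed inputs (the probability tables stand in for network outputs; the arithmetic constraint is a hypothetical rule, not a method from the papers below):

```python
from itertools import product

# Stand-ins for neural outputs: probability distributions over digits 0-2
# predicted for two "images" (values are illustrative).
p_a = {0: 0.1, 1: 0.7, 2: 0.2}
p_b = {0: 0.2, 1: 0.2, 2: 0.6}

def most_probable_consistent(total):
    # Symbolic constraint: the two digits must sum to a known total.
    # Search the joint space, keep only assignments the rule accepts,
    # and return the one with the highest joint probability.
    candidates = [(a, b, p_a[a] * p_b[b])
                  for a, b in product(p_a, p_b) if a + b == total]
    return max(candidates, key=lambda t: t[2], default=None)

best = most_probable_consistent(3)
# (1, 2) satisfies 1 + 2 == 3 and maximizes joint probability 0.7 * 0.6.
```

The symbolic layer vetoes readings the network finds likely but the rule forbids, which is the source of the reliability and interpretability claims: every accepted output carries an explicit logical justification.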
- Should Computers Be Easy To Use? Questioning the Doctrine of Simplicity in User Interface Design.
- Jittor: a novel deep learning framework with meta-operators and unified graph execution.
- Efficient Memory Management for Deep Neural Net Inference.
- Relay: A High-Level Compiler for Deep Learning.
- LLM in a flash: Efficient Large Language Model Inference with Limited Memory.
- Apple Intelligence Foundation Language Models.
- Large Language Models Are Neurosymbolic Reasoners.
- Theseus: an Experiment in Operating System Structure and State Management.