
Thinking Machines Lab, the AI startup founded by former OpenAI CTO Mira Murati, has introduced what it calls “interaction models”, a new approach to artificial intelligence designed to make conversations feel more natural and immediate.
The idea is simple but ambitious: AI that does not wait for you to finish speaking or typing before responding, but can instead engage in real time.
What are interaction models?
Interaction models are designed to process information and respond to it continuously, making AI behave more like a live conversation than a message thread. Instead of waiting for a full prompt and then replying, the system can listen, think and respond at the same time.
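The difference from a turn-based chatbot can be illustrated with a toy loop. This is a hypothetical Python sketch, not TML's actual system: the point is that the model reacts to partial input as it streams in, instead of waiting for an end-of-turn signal.

```python
def stream_words(text):
    """Simulate a user speaking: yield one word at a time."""
    for word in text.split():
        yield word

def incremental_model(tokens):
    """Toy 'interaction model': react to partial input instead of
    waiting for the full utterance to arrive."""
    heard = []
    for token in tokens:
        heard.append(token)
        # Respond mid-utterance once enough context has arrived,
        # rather than only after a final end-of-turn signal.
        if len(heard) == 3:
            yield f"(interjecting after: {' '.join(heard)})"
    yield f"(final reply to: {' '.join(heard)})"

replies = list(incremental_model(stream_words("book me a flight to Berlin")))
print(replies[0])  # -> (interjecting after: book me a)
```

A conventional chatbot would produce only the final reply; here the model also emits a response before the sentence is finished.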
The company describes this as a move towards AI that collaborates more naturally with people in real time. “We think interactivity should scale alongside intelligence; the way we work with AI should not be treated as an afterthought,” the company said in a blog post.
Early preview and claims
The system, called TML-Interaction-Small, is currently in research preview and not publicly available. The company says it will release a limited preview in the coming months, with wider access planned later this year.
Key features include:
- Seamless dialogue management, where the model implicitly tracks whether the speaker is thinking, yielding, self-correcting or inviting a response.
- Verbal and visual interjections, allowing it to respond based on context rather than waiting for the user to finish speaking.
- Simultaneous speech, enabling the user and model to speak at the same time, such as for live translation.
- Time awareness, giving the model a sense of elapsed time during interaction.
- Concurrent tool use, allowing it to search, browse and generate UI (user interface) while continuing to speak and listen.
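Features such as concurrent tool use suggest an architecture in which speaking, listening and tool calls run as parallel tasks. The following is a speculative sketch using Python's asyncio; `fake_search` and the canned phrases are illustrative stand-ins, not TML's API.

```python
import asyncio

async def fake_search(query):
    """Stand-in for a tool call (web search, UI generation, ...)."""
    await asyncio.sleep(0.2)  # simulate network latency
    return f"results for {query!r}"

async def keep_talking(transcript):
    """Keep the conversation going while the tool call runs."""
    for line in ["Let me look that up...", "Almost there..."]:
        transcript.append(line)
        await asyncio.sleep(0.1)

async def main():
    transcript = []
    # Run the tool call and the speech loop concurrently, so the
    # assistant never goes silent while it works.
    results, _ = await asyncio.gather(
        fake_search("interaction models"),
        keep_talking(transcript),
    )
    transcript.append(results)
    return transcript

transcript = asyncio.run(main())
print(transcript)
```

In a turn-based design the two steps would run in sequence; here `asyncio.gather` lets the conversational output continue while the slower tool call completes.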
Most current AI tools are built around a simple structure: the user speaks or types, the model responds, and the exchange repeats. Thinking Machines Lab argues this slows down collaboration and limits how useful AI can be in real-world tasks.
“AI labs often treat the ability for AI to work autonomously as the model’s most important capability. As a result, today’s models and interfaces aren’t optimized for humans to remain in the loop,” the company wrote.
The company says its approach is meant to keep humans involved throughout the interaction, rather than pushing them out of the process. It also points to the limitations of current systems, noting that “until the user finishes typing or speaking, the model waits with no perception of what the user is doing or how the user is doing it.”
Limitations
Long sessions remain difficult: continuous audio and video quickly fill up the model's context, so the system works best for short and medium-length interactions, and extended use requires careful context management.
It also depends heavily on strong connectivity for smooth real-time audio and video streaming, and performance can drop if the connection is weak. In addition, larger versions of the model are currently too slow to run in real time, so scaling the system while keeping it fast is still a limitation.
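One common way to manage a growing context, which TML has not confirmed it uses, is a sliding-window budget that evicts the oldest turns once the conversation exceeds a token limit. A minimal sketch:

```python
from collections import deque

def make_context(max_tokens):
    """Keep only the most recent turns so a long session's
    audio/video transcript doesn't overflow the context window."""
    turns = deque()
    total = [0]

    def add(turn, cost):
        turns.append((turn, cost))
        total[0] += cost
        # Evict the oldest turns once the budget is exceeded.
        while total[0] > max_tokens:
            _, old_cost = turns.popleft()
            total[0] -= old_cost
        return [t for t, _ in turns]

    return add

add = make_context(max_tokens=10)
add("hello", 4)
add("how are you", 5)
window = add("tell me about context windows", 6)
print(window)  # only the turns that still fit the 10-token budget
```

Here the third turn pushes the running total to 15, so the two oldest turns are dropped; the trade-off, as the article notes, is that naive eviction loses earlier conversation history.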
Early reactions
While the announcement has generated interest in the AI community, many observers say the real test will be how the system performs in everyday use.
The company itself acknowledges that the technology is still early and experimental. “Autonomous interfaces are valuable, but in most real work, users can’t fully specify their requirements upfront and walk away—good results benefit from a collaborative process where the human stays in the loop, clarifying and giving feedback along the way,” Thinking Machines Lab said.
For now, interaction models remain a research concept. Whether they reshape how people use AI will depend on how well they work outside the lab.






