Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.
Anthropic’s new MCP protocol quickly connects AI assistants with data and dev tools
We create technical standards to simplify common ways of moving information around the internet. Email protocols (SMTP, POP3, and IMAP) let different email servers and clients talk to each other. The Bluesky protocol lets users move their content and connections among social platforms. As AI models emerge as a central part of the information ecosystem, we’ll also need standardized ways of moving information to and from them.
On Monday, Anthropic offered such a standard to the online world. Its open-source Model Context Protocol (MCP) lets developers easily connect AI assistants (chatbots and agents) with databases of information (e.g., knowledge bases or business intelligence graphs) or tools (e.g., coding assistants and dev environments). At present, developers must custom-build a new connector for each resource.
“[E]ven the most sophisticated models are constrained by their isolation from data—trapped behind information silos and legacy systems,” Anthropic writes in a blog post Monday. MCP can be used to connect any kind of AI app with any data store or tool, provided both support the standard. During the preview period, developers can use MCP to connect an instance of Anthropic’s Claude chatbot running on their own computer to files and data stored on the same machine. They can also connect the chatbot to services including Google Drive, Brave Search, and Slack, via an API. The protocol will later allow developers to connect AI apps with remote servers that can serve a whole organization, Anthropic says.
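To make that concrete, here is a minimal sketch of what a local MCP server might look like, following the documented pattern of Anthropic’s MCP Python SDK and its high-level FastMCP helper. The “local-notes” server and its list_notes tool are hypothetical examples, and exact module paths and decorator names may differ across SDK versions.

```python
# Hedged sketch of a local MCP server exposing one hypothetical tool.
# Follows the documented FastMCP pattern from the MCP Python SDK; module
# paths and decorator names may vary by SDK version.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-notes")  # server name shown to connecting clients


@mcp.tool()
def list_notes(folder: str = "~/notes") -> list[str]:
    """Return the Markdown filenames in a local notes folder (hypothetical example tool)."""
    return [p.name for p in Path(folder).expanduser().glob("*.md")]


if __name__ == "__main__":
    # Runs the server over stdio so a local client (such as the Claude desktop app)
    # can connect to it during the preview period described above.
    mcp.run()
```

Any MCP-aware client that launches this process can then discover and call list_notes without a custom connector, which is the point of the standard.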
We’re well past the days of chatbots spitting out text based only on their training data (primarily, content scraped from the public web). Their usefulness (and accuracy) was limited. MCP makes it simple to arm AI apps with far more diverse and reliable information. The protocol could also make it easier for developers to build “agentic” AI apps—that is, apps that can move between various tools and data sources, working through the steps necessary to generate a desired output.
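Under the hood, MCP messages are JSON-RPC 2.0, so an agent invoking a server-side tool sends a request shaped roughly like the one below. This is a hedged sketch of the wire format; the tool name and arguments refer back to the hypothetical server above.

```python
# Hedged sketch of the wire format: MCP is built on JSON-RPC 2.0, and a client
# (chatbot or agent) invokes a server's tool with a "tools/call" request.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",                 # MCP method for invoking a tool
    "params": {
        "name": "list_notes",               # hypothetical tool from the sketch above
        "arguments": {"folder": "~/notes"},
    },
}
print(json.dumps(request, indent=2))
```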
Microsoft researchers show that “supersizing” model pretraining works on robot brains, too
The linguistic magic of ChatGPT came about when some researchers radically supersized a large language model as well as its training data. But language models aren’t the only game in town. Other kinds of AI models can also grow much smarter by scaling up training. Microsoft researchers recently showed that scaling can lead to smarter forms of embodied AI—that is, AI that interacts physically with the world, such as robots and self-driving cars.
One of the biggest challenges of training a robotic arm, for example, is teaching it to predict the probable results of its next movement. One way of doing that is “world modeling,” wherein a robot’s AI brain analyzes photos, audio, and video recordings of actions in its environment to build an internal model of the space’s physical dynamics. Another method, called “behavioral cloning,” involves training the AI by having it observe human demonstrators performing specific tasks within the environment.
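For a sense of what behavioral cloning means in practice, here is a generic, minimal sketch—not the Microsoft researchers’ model—of a policy network trained by supervised learning to reproduce a demonstrator’s actions. The observation and action sizes and the stand-in data are hypothetical.

```python
# Generic behavioral-cloning sketch (illustration only, not the paper's architecture):
# learn to map observations to the actions a human demonstrator took.
import torch
import torch.nn as nn

obs_dim, act_dim = 128, 16          # hypothetical observation/action sizes
policy = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.ReLU(),
    nn.Linear(256, act_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()     # treat actions as discrete classes

def train_step(observations: torch.Tensor, demo_actions: torch.Tensor) -> float:
    """One gradient step toward imitating the demonstrator's recorded actions."""
    logits = policy(observations)           # predicted action scores
    loss = loss_fn(logits, demo_actions)    # penalize divergence from the demo
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random stand-in data:
obs = torch.randn(32, obs_dim)
acts = torch.randint(0, act_dim, (32,))
print(train_step(obs, acts))
```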
For their study, Microsoft researchers focused on game play within a complex multiplayer video game called Bleeding Edge, in which players strategize and use “fine-grained reactive control” during combat. They found that the AI became better at world modeling and behavioral cloning as it was given more gameplay video data and more computing power to process it. The researchers observed that the rate of improvement caused by adding more data and compute closely resembled the progress seen in the training of large language models.
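That “closely resembled” claim refers to a power-law relationship between training resources and loss, the same shape seen in LLM scaling studies. A generic way to check for such a trend—using synthetic placeholder numbers, not the paper’s data—is a log-log fit like this:

```python
# Generic illustration of checking for a power-law scaling trend:
# if loss ≈ a * C^(-alpha), then log(loss) is linear in log(C).
# The compute/loss numbers below are synthetic placeholders, not the paper's data.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])   # hypothetical training FLOPs
loss = np.array([3.1, 2.4, 1.9, 1.55, 1.3])          # hypothetical evaluation loss

slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
alpha, a = -slope, float(np.exp(intercept))
print(f"loss ≈ {a:.2f} * C^(-{alpha:.3f})")           # straight line on a log-log plot
```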
People building and training the AI models that power robots and self-driving cars may be able to take a lesson from LLM training when making decisions about model size and training resources. The research suggests that the transformer architecture, introduced by Google researchers in 2017, has a unique capacity for growing smarter with more pretraining, whether it’s used for language generation, image generation, or other kinds of AI.
Trump is reportedly considering a new “AI czar” in the White House
The incoming Trump administration is thinking seriously about adding an AI czar to the White House staff, Axios reports. The person would advise government agencies on their use of AI over the next four years, and could influence government policy regarding AI in the private sector.
Elon Musk and Vivek Ramaswamy, who will lead Trump’s so-called Department of Government Efficiency, will reportedly help select the czar. That’s in part because the incoming administration believes AI could be used to help find and eliminate government waste and fraud, including entitlement fraud.
Bloomberg reported last week that the Trump team also wants a cryptocurrency czar in the White House, and that the Trump transition team has been vetting cryptocurrency executives for the role.
More AI coverage from Fast Company:
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.