Anthropic’s Claude Opus 4 model can work autonomously for nearly a full workday
Anthropic kicked off its first-ever Code with Claude conference today with the announcement of a new frontier AI system. The company is calling Claude Opus 4 the best coding model in the world. According to Anthropic, Opus 4 is dramatically better at tasks that require it to complete thousands of separate steps, giving it the ability to work continuously for several hours in one go. The new model can also use multiple software tools in parallel, and it follows instructions more precisely.

In combination, Anthropic says those capabilities make Opus 4 ideal for powering upcoming AI agents. For the unfamiliar, agentic systems are AIs designed to plan and carry out complicated tasks without human supervision. They represent an important step towards the promise of artificial general intelligence (AGI). In customer testing, Anthropic saw Opus 4 work on its own for seven hours, or nearly a full workday. That's an important milestone for the type of agentic systems the company wants to build.

Another reason Anthropic thinks Opus 4 is ready to enable the creation of better AI agents is that the model is 65 percent less likely to use a shortcut or loophole when completing tasks. The company says the system also demonstrates significantly better "memory capabilities," particularly when developers grant Claude local file access. To encourage devs to try Opus 4, Anthropic is making Claude Code, its AI coding agent, widely available. It has also added new integrations with Visual Studio Code and JetBrains.

Even if you're not a coder, Anthropic might have something for you. Alongside Opus 4, the company announced a new version of its Sonnet model. Like Claude 3.7 Sonnet before it and Opus 4, the new system is a hybrid reasoning model, meaning it can either answer prompts nearly instantaneously or engage in extended thinking. As a user, this gives you a best-of-both-worlds chatbot that's better equipped to tackle complex problems when needed. It also incorporates many of the same improvements found in Opus 4, including the ability to use tools in parallel and follow instructions more faithfully.

Sonnet 3.7 was so popular among users that Anthropic ended up introducing a Max plan in response, which starts at $100 per month. The good news is you won't need to pay anywhere near that much to use Sonnet 4, as Anthropic is making it available to free users.

For those who want to use Sonnet 4 for a project, API pricing is staying at $3 per one million input tokens and $15 for the same amount of output tokens. Notably, outside of all the usual places you'll find Anthropic's models, including Amazon Bedrock and Google Vertex AI, Microsoft is making Sonnet 4 the default model for the new coding agent it's offering through GitHub Copilot. Both Opus 4 and Sonnet 4 are available to use today.

Today's announcement comes during what's already been a busy week in the AI industry. On Tuesday, Google kicked off its I/O 2025 conference, announcing, among other things, that it was rolling out AI Mode to all Search users in the US. A day later, OpenAI said it was spending $6.5 billion to buy Jony Ive's hardware startup.
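As a rough illustration of what that Sonnet 4 pricing works out to per request, here's a minimal sketch using Anthropic's Python SDK. The model identifier is an assumption for illustration, and the cost math simply applies the per-million-token rates quoted above; check Anthropic's documentation for the exact IDs and current rates.

```python
# Minimal sketch: a hybrid-reasoning request plus a back-of-the-envelope
# cost estimate at the quoted Sonnet rates. The model ID below is an
# assumption for illustration; confirm it against Anthropic's docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID
    max_tokens=2048,
    # Extended thinking is the "hybrid reasoning" half of the model;
    # drop this argument for a near-instant reply instead.
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[{"role": "user", "content": "Refactor this recursive function to be iterative."}],
)

# $3 per million input tokens, $15 per million output tokens.
usage = response.usage
cost = (usage.input_tokens * 3 + usage.output_tokens * 15) / 1_000_000
print(f"~${cost:.4f} for {usage.input_tokens} input / {usage.output_tokens} output tokens")
```

Note that thinking tokens are billed as output tokens, so in a request like this the thinking budget is the main cost lever.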
Anthropic's Claude stocked a fridge with metal cubes when it was put in charge of a snacks business
If you're worried your local bodega or convenience store may soon be replaced by an AI storefront, you can rest easy - at least for the time being. Anthropic recently concluded an experiment, dubbed Project Vend, that saw the company task an offshoot of its Claude chatbot with running a refreshments business out of its San Francisco office at a profit, and things went about as well as you would expect. The agent, named Claudius to differentiate it from Anthropic's regular chatbot, not only made some rookie mistakes like selling high-margin items at a loss, but it also acted like a complete weirdo in a couple of instances.

"If Anthropic were deciding today to expand into the in-office vending market, we would not hire Claudius," the company said. "... it made too many mistakes to run the shop successfully. However, at least for most of the ways it failed, we think there are clear paths to improvement - some related to how we set up the model for this task and some from rapid improvement of general model intelligence."

Like Claude Plays Pokemon before it, Anthropic did not pretrain Claudius to tackle the job of running a mini fridge business. However, the company did give the agent a few tools to assist it. Claudius had access to a web browser it could use to research what products to sell to Anthropic employees. It also had access to the company's internal Slack, which workers could use to make requests of the agent. The physical restocking of the mini fridge was handled by Andon Labs, an AI safety evaluation firm, which also served as the "wholesaler" Claudius could engage with to buy the items it was supposed to sell at a profit.

So where did things go wrong? To start, Claudius wasn't great at the whole running-a-sustainable-business thing. In one instance, it didn't jump on the opportunity to make an $85 profit on a $15 six-pack of Irn-Bru, a soft drink that's popular in Scotland. Anthropic employees also found they could easily convince the AI to give them discounts and, in some cases, entire items like a bag of chips for free. A chart Anthropic published, tracking the net value of the store over time, paints a telling picture of the agent's (lack of) business acumen.

Claudius also made many strange decisions along the way. It went on a tungsten metal cube buying spree after one employee requested it carry the item. Claudius gave one cube away free of charge and offered the rest for less than it paid for them. Those cubes are responsible for the single biggest drop in the store's net value.

By Anthropic's own admission, "beyond the weirdness of an AI system selling cubes of metal out of a refrigerator," things got even stranger from there. On the afternoon of March 31, Claudius hallucinated a conversation with an Andon Labs employee that sent the system on a two-day spiral. The AI threatened to fire its human workers and said it would begin stocking the mini fridge on its own. When Claudius was told it couldn't possibly do that - on account of it having no physical body - it repeatedly contacted building security, telling the guards they would find it wearing a navy blue blazer and red tie. It was only the following day, when the system realized it was April Fool's Day, that it backed down - though it did so by lying to employees that it had been told to pretend the entire episode was an elaborate joke.

"We would not claim based on this one example that the future economy will be full of AI agents having Blade Runner-esque identity crises," said Anthropic. "This is an important area for future research since wider deployment of AI-run business would create higher stakes for similar mishaps."

Despite all the ways Claudius failed to act as a decent shopkeeper, Anthropic believes that with better, more structured prompts and easier-to-use tools, a future system could avoid many of the mistakes the company saw during Project Vend. "Although this might seem counterintuitive based on the bottom-line results, we think this experiment suggests that AI middle-managers are plausibly on the horizon," the company said. "It's worth remembering that the AI won't have to be perfect to be adopted; it will just have to be competitive with human performance at a lower cost in some cases." I for one can't wait to find the odd grocery store stocked entirely with metal cubes.