Rabbit R1 is dead on arrival

The Rabbit R1 is garbage. It's dead before it's even out.

At CES 2024, the device that caught everyone's attention was a small gadget that could control your apps and websites.

Rabbit R1 in Jesse Lyu's hands. Isn't that cute?

The device is pretty simple: it has a push-to-talk (PTT) button akin to a walkie-talkie, and a touchscreen display for interaction. It also has a camera that can capture and understand its surrounding context. And instead of relying on LLMs, it relies on LAMs (Large Action Models). Simply put, an LLM is better suited to generating text and graphics; a LAM, on the other hand, performs tasks by interacting with applications.

The Rabbit R1 presentation was great. Jesse Lyu showed the world a vision of what an agent-centric OS could be. Even Satya Nadella praised it, comparing the Rabbit OS demo to Steve Jobs's launch of the original iPhone in 2007. That is no small feat.

The interesting thing is that Rabbit R1's vision of controlling different applications without the user going into each individual app is not radical. Way back in 2013, Steve Wozniak already had this vision for the future of apps and Siri, long before ChatGPT, LLMs, and LAMs became a thing.

Main issues I have with the Rabbit R1

If the presentation and vision were so amazing, why am I hating on it?

In the presentation, the demo looked great. But a great demo needs to be backed up by reality, and to me, using the device in practice will be pretty restrictive.

Firstly, you can't do things hands-free.

You need to PTT to activate the R1. It may seem easier and more novel to trigger than today's Alexa, Siri, and Google Assistant. Who else had a button that did nothing but trigger an assistant? Samsung Bixby. I think history has shown that people just don't like pressing a button versus controlling the assistant by voice. Even if some groups (for example, blind users) require a physical button, that is something you can already customise with the iPhone's Action button today.

Secondly, this is another addition to your existing army of devices.

Most people today already carry an assortment of phone, smartwatch, wireless buds, and tablet. The Rabbit R1 is yet another device you will need to carry around and charge. Why add another product that needs charging and connectivity when a smartphone could fulfil the virtual assistant role? Fragmenting functionality across hardware rarely sticks.

Thirdly, the use cases seem to be too restricted.

As the Rabbit R1 requires PTT, it is unlikely to make a play in the smart home arena, where Alexa, Google Assistant, and Siri operate. This may be intentional, but it feels like it limits the capability too much.

What the Rabbit R1 needs to be

So how and what should the R1 be? For this to truly take off, I don't think being a standalone device will work. It needs to be embedded into either Android or iOS. In fact, I won't be surprised if Google or Apple acquires it and embeds it directly into their OS, similar to what Apple did with Siri. After all, why use another standalone device when you already carry such a powerful device, with all your apps on it, every day? The recent Samsung Galaxy S24 launch of Galaxy AI has shown that embedded AI on your phone is possible.

For this to work, I feel that there are several things that need to happen.

  1. Google and Apple need to get their act together. They cannot treat this like the existing Siri vs Google Assistant war and just slap on some piecemeal use cases. They have to fundamentally rethink what the UI and UX could be and make it an integrated, cohesive ecosystem. Google has to do a lot more than just rebrand Assistant to Bard. Gemini could be a starting point, but it could also be a limiting one if they still go down the path of a more powerful LLM instead of the LAM route. Also, Gemini is built more to be an AI for the flow of work than to be an agent-centric OS. Keeping the two separate might be a better approach, as their purposes differ.
  2. Google and Apple need to give this 'agent' access to orchestrate tasks across all the different apps the user has. In other words, the 'agent' needs to sit at the native OS layer, able to control apps directly rather than requiring permission per app.
  3. This will also profoundly change app development. App developers today mainly design for users controlling the device. If the autonomous agent is to do the controlling, developers will need to build, as standard, an agent-facing UX that tells the 'agent' how the app should be used. It's almost like a new form of API, with the 'agent' acting as the interaction layer. Developers would then 'train' the 'agent' on the app's intended behaviours. For example, a ride-hailing app needs to teach the agent how to book a ride so that the whole flow can be driven by voice.
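To make the third point concrete, here is a minimal Python sketch of what such an agent-facing declaration might look like. Every name in it (`AgentManifest`, `AgentAction`, `AcmeRides`, `book_ride`) is hypothetical; no such standard exists today, and a real design would need far more (authentication, confirmation flows, error handling):

```python
# Hypothetical sketch: instead of a visual UI, an app declares a manifest of
# actions an OS-level agent may perform, plus the slots the agent must fill
# by asking the user. All names are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class AgentAction:
    """One task the agent is allowed to perform on the app's behalf."""
    name: str
    required_slots: list          # details the agent must collect by voice
    description: str              # natural-language hint for intent matching


@dataclass
class AgentManifest:
    """What an app exposes to the agent in place of a screen-based UX."""
    app: str
    actions: list = field(default_factory=list)

    def find_action(self, intent: str):
        # The agent matches a resolved user intent against declared actions.
        return next((a for a in self.actions if a.name == intent), None)


# A ride-hailing app's (imaginary) declaration:
ride_app = AgentManifest(
    app="AcmeRides",
    actions=[
        AgentAction(
            name="book_ride",
            required_slots=["pickup", "destination"],
            description="Book a ride from a pickup point to a destination.",
        )
    ],
)

# "Book me a ride" -> the agent finds the action, then asks the user
# for each unfilled slot before invoking the app.
action = ride_app.find_action("book_ride")
print(action.required_slots)
```

The point of the sketch is the shape of the contract, not the code itself: the app describes *what* can be done and *which* details are needed, and the agent owns the conversation that fills them in.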

The future of Rabbit R1

So yes, I do think the Rabbit R1 is dead on arrival. It has arrived in a slightly unfortunate era, caught between today's smart assistants (limited to very simple app controls) and tomorrow's (fully autonomous agents with control over all apps), an area that is attracting heavy investment.

Rabbit OS wants to do something revolutionary and fundamentally disrupt the OS layer. It wants to be the layer that has visibility across all your apps (and of course it has to be OS agnostic, be it Android or iOS) and can control them. Will the vision and the way it works remain the same if it's acquired by Apple or Google? I doubt it, as its vision would inevitably be tied to that of the individual OS. It would be another war like Siri vs Google Assistant.

I cannot see the Rabbit R1 alone pioneering the adoption levels needed to drive this systemic overhaul. But perhaps the inspiration it sparks will fuel serious movement from the major vendors over the next 2-3 years.

I still cling to faint optimism amid the cautionary tales littering the world of once-hot startups. Do you think my pessimism is misplaced? Or is mobile maturity for capable, ubiquitous agents more distant than enthusiastic keynotes suggest?

Robert Ing ☁️

I totally agree with your point that we don't need another device to forget to charge. And, in time, I'm sure smartphones will have these capabilities built in. But for early adopters of tech, what's a better conversation piece than a cute, bright-orange handheld device that's already sold out haha!
