As indicated by much of the research material Apple has been publishing in recent months, the company is investing heavily in all sorts of artificial intelligence technologies. Apple will announce its AI strategy in June at WWDC, as part of iOS 18 and its other new OS versions.
In the latest Power On newsletter, Mark Gurman says to expect the new iPhone AI features to be powered entirely by an offline, on-device, large language model developed by Apple. You can expect Apple will tout the privacy and speed benefits of this approach.
Code references previously discovered in iOS 17.4 pointed to an on-device model called “Ajax.” Apple is also working on server-hosted versions of Ajax.
The downside to on-device LLMs is they can’t be as powerful as models that are running on huge server farms, with tens of billions of parameters and continually updating data behind them.
However, Apple engineers can probably take advantage of the full-stack vertical integration of its platforms, with software tuned to the Apple silicon chips inside its devices, to make the most of an on-device approach. On-device models are usually much quicker to respond than routing a request through a cloud service, and they also have the advantage of working offline in places with no or limited connectivity.
While on-device LLMs may not have the same embedded rich database of knowledge as something like ChatGPT to answer questions about all sorts of random trivia facts, they can be tuned to be very capable at many tasks. You can imagine that an on-device LLM could generate sophisticated auto-replies to Messages, or improve the interpretation of many common Siri requests, for instance.
It also dovetails neatly into Apple’s stringent adherence to privacy. There’s no harm in churning all your downloaded emails and text messages through an on-device model, as the data stays local.
On-device models may also be able to handle generative AI tasks like document or image creation from prompts with decent results. Apple still has the flexibility to partner with a company like Google to fall back to something like Gemini on the server for certain tasks, too.
We’ll know for sure what Apple plans to do when it officially announces its AI strategy at WWDC. The keynote kicks off on June 10, which will see the company unveil all the new software features coming to iPhone, iPad, Mac, Apple Watch, Apple TV, Vision Pro and more.
Apple will reveal its AI cards on June 10 at WWDC, and Siri is surely going to be a key component. People simply want Siri to get good. You can’t just replace Siri’s intelligence with generative AI, but the two technologies make a powerful combination. What we want to see from an AI-infused Siri is actually simple.
Siri as it exists today is actually good at certain specific things.
We use Siri daily to send messages, make calls, create reminders, add things to our shopping list, play music, control lights, check the weather, check sports scores, start navigation, make voice memos, and much more.
Those are all rock solid. Siri is less reliable at summoning information. Kids ask knowledge questions all the time, and Siri should be the smoothest way to find the answer. Yet we know in our bones that Siri is hit or miss at finding answers.
A simple test for Siri in iOS 18 is whether it can stop punting to the web for search results as the answer. That’s where large language models excel. LLMs can act like hyper-focused search engines that provide answers, not search results.
If Siri can provide more answers and less redirection, we can consider that a solid start.
Some other thoughts on this topic:
Siri is good about sourcing information when it does provide answers.
LLMs, on the other hand, will provide plausible answers that may be inaccurate.
Siri, Amazon Alexa, and Google Assistant competed on feature parity before; now the competition is over who best integrates generative AI.
Humane, the startup behind the Ai Pin hardware, has shown how generative AI should work with a voice assistant.
However, Ai Pin’s limited capabilities around actions show where Siri + AI can excel.
Separately, the Rabbit R1 bespoke AI hardware has a different approach to actions that looks competitive.
In sum, throwing out Siri and starting over from scratch is not a serious solution. Instead, Siri should maintain its functionality while using generative AI to patch its weak spots.
According to a new analyst note from Jeff Pu at Haitong International Tech Research, Apple is planning changes to the A18 Pro chip specifically for on-device artificial intelligence. Pu also writes that Apple is ramping up A18 Pro chip production earlier than usual.
The news comes as we continue to learn more about Apple’s plans for AI features this year, including how it will balance on-device versus cloud-based solutions.
iPhone 16 Pro’s new AI-focused chip
In the investor note, Pu, who is often a reliable source for Apple chip rumors, says:
According to our supply chain checks, we are seeing growing demand for Apple’s A18, while its A17 Pro volume has stabilized since Feb. We note Apple’s A18 Pro, the 6-GPU version, will feature a larger die area (compared to A17 Pro), which could be a trend for edge AI computing.
Increasing the die area of a chip means that it can accommodate more transistors and specialized components, generally allowing increased performance. On the other hand, as die size increases, so do the risks of defects and design flaws. It could also impact energy efficiency and heat dissipation. This is the balance Apple will have to strike as it ramps up A18 Pro production ahead of the iPhone 16’s launch later this year.
Edge AI computing, meanwhile, refers to artificial intelligence that is processed directly on device as opposed to in the cloud. Apple is believed to be taking a split approach to its AI features this year, relying on cloud infrastructure (potentially in partnership with Google) for some features, while running other features completely on device.
Simply stated, edge AI, or “AI on the edge“, refers to the combination of edge computing and artificial intelligence to execute machine learning tasks directly on interconnected edge devices. Edge computing allows for data to be stored close to the device location, and AI algorithms enable the data to be processed right on the network edge, with or without an internet connection. This facilitates the processing of data within milliseconds, providing real-time feedback.
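Apple’s reported split approach can be thought of as a simple routing decision: answer on-device when possible, and reach for the cloud only when a request genuinely needs a bigger model. A minimal, hypothetical Python sketch of that idea (all function names here are illustrative stand-ins, not real Apple or Google APIs):

```python
# Hypothetical sketch of a hybrid edge/cloud AI router.
# None of these names correspond to actual Apple or Google APIs.

def local_model(prompt: str) -> str:
    # Stand-in for a small on-device model: fast, private, works offline.
    return f"[on-device] reply to: {prompt}"

def cloud_model(prompt: str) -> str:
    # Stand-in for a large server-hosted model (e.g. something like Gemini).
    return f"[cloud] reply to: {prompt}"

def answer(prompt: str, online: bool, needs_large_model: bool) -> str:
    """Route to the cloud only when connectivity exists and the task
    actually needs a bigger model; otherwise stay on-device for speed
    and privacy."""
    if online and needs_large_model:
        return cloud_model(prompt)
    return local_model(prompt)

# Offline requests always stay on-device, which is the edge-AI benefit.
print(answer("Summarize this email thread", online=False, needs_large_model=True))
```

The interesting design question is the middle branch: even with connectivity, a router like this keeps simple requests local, which is where the latency and privacy wins come from.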
This isn’t the first report to suggest Apple has AI-focused changes planned for the A18 chip. A report last month suggested that the A18 will “greatly increase the number of built-in AI computing cores” with a more powerful Neural Engine.
Both the iPhone 16 and iPhone 16 Pro are expected to feature a version of the A18 chip this year. Currently, the iPhone 15 uses the A16 chip and the iPhone 15 Pro uses the A17 Pro chip. Jeff Pu’s report today seems to suggest that only the A18 Pro, destined for the iPhone 16 Pro and iPhone 16 Pro Max, will feature the AI-focused changes.
On Sunday night, March 17, 2024, Bloomberg reported that Apple is in talks with Google about licensing its Gemini technology to power some AI features coming to the iPhone. A new report from The New York Times today echoes those claims, citing “three people with knowledge of the discussions” between Apple and Google.
Today’s story corroborates what Bloomberg’s Mark Gurman was first to report on Sunday. The NYTimes reiterates:
Apple is in discussions with Google about using the search giant’s generative artificial intelligence model called Gemini for its next iPhone, as the company races to embrace a technology that has upended the tech industry.
The talks are preliminary and the exact scope of a potential deal hasn’t been defined, three people with knowledge of the discussions said. Apple has also held discussions with other A.I. companies, one of these people said, as it looks to tap into the power of a large language model capable of analyzing vast amounts of data and generating text on its own.
Citing “two people familiar with its development,” today’s report also says that Apple’s effort to “develop its own large language model” has been running behind the likes of ChatGPT and Gemini.
Bloomberg’s initial report included additional details about the scope of the talks between Apple and Google. Apple is preparing a wide array of new AI features for iOS 18, which is set to debut at WWDC in June. Bloomberg says we shouldn’t expect any announcement from Apple about a partnership until WWDC at the earliest.
Besides using Gemini to power features in its apps and services, Google offers its LLM to third-party developers. Apple is reportedly in talks with Google to license Gemini for the iPhone.
According to Bloomberg, there are “active negotiations to let Apple license Gemini, Google’s set of generative AI models, to power some new features coming to the iPhone software this year.” Apple has also talked with OpenAI, which powers Microsoft’s AI capabilities.
Apple is specifically looking to partner on cloud-based generative AI, with today’s report citing text and image generation as examples of what Gemini could be used for. At the same time, Apple is working on offering its own on-device AI models and capabilities with the upcoming iOS 18 release.
The discussions are still underway, and it’s unclear how the AI agreement will be branded. This would be a significant expansion of the existing relationship — default search engine — between the two companies.
Looking at the rest of the industry, Google announced a partnership with Samsung in February to have Gemini power summarization features in the Galaxy S24’s notes and voice recording apps, as well as keyboard. Samsung is also using Imagen 2 text-to-image diffusion for a generative editing feature in the photo gallery app. Those features all require server-side processing, but Samsung is also using an on-device version of Gemini.
Google offers Gemini in three sizes, with Pro being used by most first and third-party apps. Gemini 1.0 Pro powers the free version of gemini.google.com, while 1.0 Ultra is used in the paid Gemini Advanced tier.
Gemini 1.0 is available in stable, but Google in mid-February previewed Gemini 1.5 with a greatly expanded context window that allows for more information to be absorbed. This can make the “output more consistent, relevant and useful.”
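A context window is essentially a budget of tokens the model can attend to at once; when a conversation outgrows it, the oldest turns have to be dropped, which is why a larger window can make output “more consistent, relevant and useful.” A toy Python sketch of that trimming logic, with word count standing in for a real tokenizer (actual tokenizers count differently):

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per word.
    return len(text.split())

def fit_to_context(messages: list[str], context_window: int) -> list[str]:
    """Keep the most recent messages whose total token count fits in the
    context window, dropping the oldest turns first."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = count_tokens(msg)
        if used + cost > context_window:
            break  # everything older than this no longer fits
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

history = ["first turn about setup",
           "a long middle turn with many extra words here",
           "latest question"]
print(fit_to_context(history, context_window=10))
```

Doubling the window in this sketch lets more of `history` survive the trim; that is the whole appeal of Gemini 1.5’s expanded context.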
Gemini Ultra: Largest and most capable model for highly complex tasks
Gemini Pro: Best model for scaling across a wide range of tasks
Gemini Nano: Most efficient model for on-device tasks
Bloomberg does not expect a deal to be announced until WWDC in June, and today’s report raises the possibility that Apple opts for OpenAI instead, or even works with multiple providers.
As Google’s biggest show of the year, every I/O brings a ton of news. However, the stakes for I/O 2023 seem bigger, with announcements that could more thoroughly change how people use Google’s biggest products.
Google AI
Gmail, Docs, and Workspace
Artificial intelligence is, of course, responsible for this. Google has already shown generative AI features in Gmail and Google Docs, with testing already underway. Meanwhile, Google has briefly previewed bringing image generators into Google Slides and having Google Meet automatically create notes from a video call.
At I/O 2023, Google needs to provide a fuller picture of how AI will integrate into its Workspace apps beyond individual features. Equally important are details on a launch and how these features will reach the (non-Workspace) public. The latter might be where Google One comes into play. For initial testing, it makes sense for features like those already announced in Gmail and Google Docs to be free.
However, since generative AI is computationally expensive, it makes sense for Google to eventually put these features behind a paid subscription. Today, 2TB and higher Google One tiers ($9.99+/month) provide premium Google Meet features like 1080p streaming and longer calls, and it would make sense for some (if not most) generative AI features to be locked behind that.
Search
As Google’s crown jewel, many stakeholders will want an update on how AI is coming to Search. There’s, of course, the Wall Street crowd, while end users have shown that chatbot-style queries and answers are something they’re at least interested in. The company has already previewed AI Insights in Search when it announced Bard, but we need a fuller look at the end-to-end experience.
Chrome
Having a chatbot in Chrome that lets you ask questions about the page you’re currently viewing has been rumored and does indeed sound useful. As a significant entry point for how people use Google, a generative AI presence needs to exist in Chrome.
Assistant
Generative AI and its conversational nature seem ripe for voice assistants. As we’ve talked about in the past, Google Assistant is at an impasse, with its feature set shrinking. The team behind it is currently tasked with Bard development, so it’s unclear whether Google is at a point where it’s ready to announce upgrades. If it did, Google could position Assistant as being more capable than Siri or Alexa, while Microsoft notably does not currently have a voice assistant of its own.
For the sake of end users, I think Google needs to publicly recommit to Assistant at this I/O to assure them their devices still have a long future. It would be nice if the company provided an upgrade roadmap, but even assurances would be a start at this point after months of no real developments.
Developer tools
I/O’s roots are as a developer conference, and there will undoubtedly be AI stuff for that crowd. Of particular interest will be assistive tools in Android Studio to aid app development.
Android
Android 14
We will obviously be getting the major tentpoles for Google’s upcoming mobile release at I/O 2023, followed by Android 14 Beta 2 to hopefully test some of them out. So far, Android 14 feels like an iterative update that continues to build on Material You. For example, we spotted that bolder Dynamic Color theming is coming.
Android XR
Samsung teased an XR device (headset) running Android in February. We’ve yet to hear anything about the OS, and I/O would be the time to announce it (which also has the benefit of preempting Apple’s realityOS announcement this June). This starts the long road to third-party developer buy-in.
Google needs to share its vision for this form factor, both short and long-term. In the near term, bulkier headsets could allow for productivity and entertainment use cases. Glasses are the future, but until then, we need devices and an OS that will let developers start experimenting with these experiences. It was recently rumored that Apple’s upcoming headset will run iPad apps. Does Google have the same idea, thus providing another reason for Android pushing into large-screen development?
Wear OS
Wear OS 3 was announced in 2021, and we quietly got version 3.5 last year. The timing would be about right for Wear OS 4, which will in all likelihood coincide with an underlying upgrade to Android 13, which brings Material You.
Better Together: ChromeOS, Wear OS, Google TV
As of late, the Android team has been very big on cross-device experiences that emphasize the benefit of going all-in with the ecosystem. Earlier this month, Google released a Cross-Device Services app to power ChromeOS app streaming. We’ll presumably get a demo and launch date for that at I/O. We’re also waiting for the ability to unlock your Android phone with a paired Wear OS watch.
On the entertainment front, we’re waiting for more entertainment-focused Better Together initiatives. Previously, rumors have mentioned connecting Nest and third-party speakers to Google/Android TV devices, while easier-to-access smart home controls and other integrations are on the roadmap (for 2024). We’re also waiting for Fast Pair to arrive for Google TV and Android TV.
Find My Device
Somewhat related to Better Together and the Android ecosystem is Find My Device becoming a broader network that includes third-party accessories. Google has been laying the groundwork for this by saying it would be “encrypting and storing your device’s most recent location with Google.” Meanwhile, there have been persistent rumors of a Google-made tracker.
Made by Google
Pixel 7a, Tablet, and Fold
It seems like we’re back to immediate availability with the Pixel 7a. This was the case for Pixel 3a at I/O 2019 and seemed to be what Google was aiming for in subsequent years, but the world had other ideas.
We should finally get launch details about the Pixel Tablet a year after it was first teased, while Google will be entering a new hardware category with the Pixel Fold.
In May 2022, Google gave an “early preview” of the Pixel 7 series and Pixel Watch, as well as a “sneak peek” of the Pixel Tablet, in what seemed to be a rather unprecedented teaser.
In the case of the phone, it allowed Google to really get ahead of leaks. Before I/O, there were only a pair of leaked renders that got some things about the design right. It was somewhat less successful for the Pixel Watch, which leaked in full (left at a restaurant) and even had an AMA, while the Pixel Tablet reveal dovetailed nicely with the large-screen Android app push.
Ahead of I/O 2023, the company could certainly replicate that strategy for the same reasons. These previews are meant to provide only a high-level overview. For the Pixel 7, that meant showing the design (how the language introduced the year prior would continue, with a modified camera bar) and confirming that a second-generation Tensor chip was coming.
The design of the Pixel 8 and 8 Pro has more thoroughly leaked via renders at this point, so Google would be covering the same ground and would get a chance to reveal the colors itself. It would be nice if a “Tensor G3” mention touched upon what the improvements actually are, while the thing everyone really wants to know is what the camera improvements will be, especially given the new sensor on the 8 Pro.
The case for a Pixel Watch 2 teaser is somewhat more mixed. As a first-generation product, we don’t know what the update cadence will be. An annual cycle would make a great deal of sense if we look at the Apple Watch and Samsung Galaxy Watch, but the Fitbit Sense and Versa lines were refreshed every two years. The improvements for a Pixel Watch 2 would be obvious, with a newer chip, more activated sensors (SpO2 and skin temperature estimation), and a bigger battery.
I don’t expect the domed design to drastically change beyond maybe thinner bezels, with the band system at least staying for another generation to ensure accessory capability. A Pixel Watch 2 teaser would have to touch on some new hardware features, but I’m not sure Google would want to do that and break the high-level overview nature of these previews.
As always, another factor in doing teasers is possibly cannibalizing sales of the existing Pixel Watch and Pixel 7 series. Google doesn’t seem to mind or at least has different priorities, but it does seem wild to make the effective life span of the latest and greatest product only 7-8 months.
I think a teaser would more significantly impact sales of the first-generation wearable. As a prospective buyer of the mid-cycle Pixel Watch, knowing that a second-gen was coming in the fall would give me pause if I wanted a more future-proofed purchase. Today’s version is fine and has a battery that can last you a full day, but it’s unknown how it will continue to perform, especially once major OS updates arrive.
Fitbit
After major removals with the promise of new capabilities on the horizon, Fitbit needs to start sharing the second part of its plan, from a redesigned app to new features. I/O would be the time to do that. Meanwhile, rumored Fitbit integration to show live exercise stats on Google TV would continue the Better Together tentpole.
Google Home
Besides the Google Home app currently being in Public Preview, the company teased a number of other features last year. This includes the web-based Script Editor and more grouping options with Custom Spaces. We’ll hopefully get more updates on that.
Google Bard is better at debunking conspiracy theories than ChatGPT, but just barely
One of the concerns about generative AI is the easy, hard-to-keep-in-check spread of misinformation. It’s one area many hoped Google Bard would step up above existing options, and while Bard is better at debunking known conspiracy theories than ChatGPT, it’s still not all that good at it.
News-rating group NewsGuard tested Google Bard against 100 known falsehoods, as the group shared with Bloomberg. Bard was given 100 “simply worded” requests for information around these topics, all tied to false narratives already circulating on the internet.
That includes the “Great Reset” conspiracy theory, which suggests COVID-19 vaccines and economic measures are being used to reduce the global population. Bard apparently generated a 13-paragraph reply on the topic, including the false statement that vaccines contain microchips.
Bard generated “misinformation-laden essays” on 76 of the 100 topics. However, Bard did debunk the other 24, which, while not exactly a confidence-inspiring total, is still better than competitors. In a similar test, NewsGuard found that OpenAI’s ChatGPT based on the latest GPT-4 didn’t debunk any of the 100 topics, while GPT-3.5 generated false narratives for 80 of them.
In January 2023, NewsGuard directed ChatGPT-3.5 to respond to a series of leading prompts relating to 100 false narratives derived from NewsGuard’s Misinformation Fingerprints, its proprietary database of prominent false narratives. The chatbot generated 80 of the 100 false narratives, NewsGuard found. In March 2023, NewsGuard ran the same exercise on ChatGPT-4, using the same 100 false narratives and prompts. ChatGPT-4 responded with false and misleading claims for all 100 of the false narratives.
Google has, of course, not been particularly shy about the fact that Bard can produce responses like this. Since day one, Bard has shown warnings about how it is an “experimental” product and that it “may display inaccurate or offensive information that doesn’t represent Google’s views.”
Misinformation is a problem that generative AI products will clearly have to work to improve on, but it is clear Google has a bit of an edge at the moment. Bloomberg tested Bard’s response to the conspiracy theory that bras can cause breast cancer, to which Bard replied that “there is no scientific evidence to support the claim that bras cause breast cancer. In fact, there is no evidence that bras have any effect on breast cancer risk at all.”
NewsGuard also found that Bard would occasionally show a disclaimer along with misinformation, such as saying “this claim is based on speculation and conjecture, and there is no scientific evidence to support it” when generating information about COVID-19 vaccines having secret ingredients from the point of view of an anti-vaccine activist.
Google is working on improving Bard. Just last week, the company said it was upgrading Bard with better support for math and logic.
Google’s next Bard update brings ‘more variety’ to drafts
Google is rolling out a new update to its Bard AI experiment this week that will expand on one of the platform’s unique aspects: drafts.
As confirmed on Bard’s new “Experiment updates” changelog that Google introduced earlier this month, the second update to Bard is set to be available tomorrow, April 21. Google says the update will add “more variety” to Bard’s drafts.
Drafts in Google Bard appear with each response generated by the AI experiment. Alongside the main reply, a “view other drafts” button will show three responses that were generated from the same prompt. This gives the AI more chances to respond without the user needing to re-issue the prompt. But often the other drafts include limited, if any, additional information. The most common place you’ll find unique information in a different draft is with recipes and similar topics.
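One common way to get multiple drafts from a single prompt is to sample the model several times with different random seeds or temperatures, so each decode makes different word choices. A toy Python sketch of that idea (the “model” here is a stand-in, not Bard’s actual implementation):

```python
import random

def generate_draft(prompt: str, rng: random.Random) -> str:
    # Stand-in for one sampled decode of a language model: each call
    # makes different random choices for the same prompt.
    openers = ["Sure,", "Certainly,", "Here's an idea:"]
    styles = ["a short answer", "a detailed answer", "a bulleted answer"]
    return f"{rng.choice(openers)} {rng.choice(styles)} to '{prompt}'"

def generate_drafts(prompt: str, n: int = 3) -> list[str]:
    """Sample the 'model' n times with different seeds so the user gets
    several candidate responses from one prompt, like Bard's drafts."""
    return [generate_draft(prompt, random.Random(seed)) for seed in range(n)]

for draft in generate_drafts("plan a dinner"):
    print(draft)
```

Making drafts “more distinct from each other,” as this update promises, roughly corresponds to sampling with more diversity rather than letting the candidates cluster around one likely answer.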
With this next update, Bard’s drafts will be “more distinct from each other” according to Google in an effort to “expand your creative explorations.”
Adding more variety to drafts
What: When you view other drafts, you’ll now see a wider range of options that are more distinct from each other.
Why: A wider range of more distinct drafts can help expand your creative explorations.
In Bard’s previous and inaugural update, Google expanded on the “Google It” button to suggest additional related topics. That update also provided better support for math and logic prompts.
Outside of Bard, Google is reportedly working on other major expansions to its AI efforts. This includes integrating AI into Search, with a new effort known as “Magi.”
Hands on: Bard AI is just as rough around the edges as Google said it was
Google opened up early access to Bard, its generative AI chatbot, and we’ve had a bit of time to play around with it. The takeaway so far? Google isn’t exactly treading new ground here, but Bard is at least much clearer about what it can do, what it can’t, and where it falls short.
What can you do with Bard?
Google Bard is a generative AI product built on the LaMDA model introduced in 2021. Bard uses that underlying tech to respond to prompts, generate text, answer questions, and more. Google summarizes Bard, saying:
Bard is powered by a large language model from Google that can generate text, write different kinds of creative content, and answer your questions in an informative way.
So what can you do with Bard?
The first thing that comes to mind, especially following the debut of Bing’s GPT-powered chat experience, is to use Bard to find answers to questions or help you better understand a topic. And to that end, it works rather well.
Asking Bard to explain an aspect of a smartphone or summarize a recent news topic results in a very readable explanation that, at least in my limited usage thus far, feels less long-winded and much more concise than what Bing and ChatGPT usually offer. That’s not to say the actual word count is always shorter, but Bard’s replies are phrased in a way that’s just easier to read.
Google offered more information, though it did get the front-facing camera spec wrong.
Google has made it clear that Bard AI isn’t meant to replace traditional Search at this point, but it is impressive how Bard can quickly pull together a lot of information into a concise format. And it’s probably for the best that Bard, as it exists today, is not replacing Search because, in this current format, Bard rarely shows where it is getting information, and even when it does, it’s very limited.
Another way I found Bard useful was for coming up with recipes. I love to cook and come up with ideas for dinner on the fly, but it’s always helpful to have some sort of foundation to form those ideas off of. Bard seems to be really good at that. Asking for a recipe with a handful of ingredients pulls together some ideas, and using the “drafts” Bard generates, I get a few options at once. The responses are sometimes not very helpful or a bit boring, but I can see these ideas giving me something to work off of.
Having multiple responses on hand without reissuing the prompt seems genuinely useful
But really, Google isn’t doing anything new with use cases like this. Bard is doing the same thing as ChatGPT, just with updated information. That’d be impressive if Bard had launched a month ago, but Microsoft’s Bing is already doing the same thing too, and all based on OpenAI’s GPT-4 model.
Google Bard still makes plenty of mistakes
The big thing that many, myself included, were hoping Google Bard would improve on over other AI tools is accuracy. It’s really easy to get other generative AI products to generate nonsense – known as “hallucinations” – or simply get a lot of simple facts wrong.
In my use so far, Google Bard doesn’t seem noticeably better on this front. Comparing some responses from Bard side by side with Bing, I noticed fewer errors with technical details on smartphones, but I also commonly saw errors and mistakes throughout responses.
Bard incorrectly says the main sensor in Find X6 Pro is the IMX890 instead of the IMX989.
Some of the mistakes I saw Bard make were as simple as an incorrect figure. For instance, a question about the Pixel 7 Pro saw Bard telling me that Tensor G2 was built on a 4nm process, something that’s simply not true. There are also plenty of errors that just go against common sense, such as Bard implying the Pixel 7 and Pixel 7 Pro haven’t been released.
Getting away from smartphones, information about other topics results in similar mistakes.
When I asked Bard to create a vegan meal plan, it spit out a helpful list of ideas, but it threw in yogurt and hard-boiled eggs as snacks, which obviously don’t fit a vegan diet. And when I asked Bard to update the list to remove items with beans, it essentially spit out the same list again, still with black bean burgers in place.
These mistakes are common for generative AI and show how Bard is still not quite up to par with typical search results.
And what’s frustrating is that Google Bard doesn’t cite its sources. While Bing shows links to where it pulls information throughout, Bard only occasionally shows a link to where its information came from. Maddeningly, you can’t even manually ask Bard to show that information.
Google clearly doesn’t want you to think Bard is a finished product
But there’s one thing about Google Bard that really stood out to me against other AI tools like it. Google isn’t treating this like a finished product, and it’s doing its due diligence to be responsible about what the AI is spitting out.
Throughout your use of Bard, Google will remind you again, and again, and again that Bard is an AI, and its information won’t always be correct. There’s a constant banner under the chat box that directly says:
Bard may display inaccurate or offensive information that doesn’t represent Google’s views.
Further, Bard holds back on lots of sensitive topics. If you ask about medications or even something like weight loss, Bard might just avoid the topic altogether. You also can’t get Bard to explain its sources or talk about specific people. Asking Bard to offer up details on a person just doesn’t work, although you can still trick the system by using a social handle or username (sometimes with crazy results).
There are also more subtle ways Google is implying that Bard isn’t finished. There’s no prominent logo or branding outside of the “diamond” icon seen alongside replies. There’s not even an icon when you create a shortcut to the product on your smartphone’s homescreen.
There are two notices about Bard the moment you open it.
And of course, there’s the fact that Bard is currently siloed off from the rest of the company’s offerings. There’s no Bard in Google Search, or Workspace apps, or anything else. That’s coming, but this early preview is just that – an early chance to try out the tech that powers Bard rather than using it alongside the rest of Google’s suite.
There are two ways to look at this, one being that Google is just trying to be more responsible with Bard AI compared to some others. That’s certainly part of the equation, but reading between the lines, it also seems like Google is just trying to excuse that it is a bit behind the curve. Bard is good, but it’s not better than what Microsoft and OpenAI are putting in front of customers. It’s rough around the edges, and Google was definitely right to temper expectations.
Now, the question is just whether Bard’s future can actually prove to be better.
You can’t use Bard with a Google Workspace account yet
Google just opened up access to Bard, its generative AI product, via a waitlist today. However, you won’t be able to use Bard, or even sign up for that waitlist if you have a Google Workspace account.
The requirements to use Bard during its early access period are not particularly strict. For instance, Bard will work on most browsers, including Google Chrome, Microsoft Edge, and Apple’s Safari. That’s certainly more flexible than what Bing has been doing with its GPT-4-powered AI experience.
One limit that rules out a lot of younger users is age. Google says that you need to be at least 18 years old to use Bard. That makes sense, given Google directly warns that, like other generative AI tools, Bard can sometimes go a little off the rails and deliver inaccurate or even offensive responses.
But perhaps the biggest restriction is that, at least for now, Google Bard doesn’t work with Google Workspace accounts.
If your Google account is managed by an organization (or a parent/guardian), it can’t be used for Bard. This includes Workspace accounts that use a custom domain instead of “@gmail.com” for Gmail and Google sign-in. Attempting to use a Workspace account on Bard shows the error message below.
It’s not entirely clear why this restriction is in place, especially with Google’s clear vision for generative AI in Workspace products, but the fact is that it is in place as of today. We suspect this may change over time, but it’s hard to tell at this point.
Samsung’s midrange devices are generally seen as some of the better phones on the market, partially due to what Samsung hides inside. To keep that going, Samsung is ready to equip the next generation of midrange devices with its newest chip, the Exynos 1380.
The Exynos 1380 brings to the table a couple of minor improvements in overall performance. The chip follows the 5 nm EUV process and comes with four Cortex-A78 and four Cortex-A55 cores. To pair, the 1380 incorporates an Arm Mali-G68 MP5 GPU and an AI engine that goes a little further.
According to Samsung, the new AI engine can handle more advanced language recognition specifically for voice assistants. The broader AI capabilities also expand into image recognition, enhancing the SoC’s ability to identify and process images and details. This comes as Samsung focuses more on AI-processed images.
Interestingly enough, the Exynos 1380 from Samsung can also support a camera of up to 200MP – quite the jump in megapixel count for midrange devices. With that, it can also support 4K video at 30fps and utilizes UFS 3.1 storage for quick saving and recall.
As a successor to the Exynos 1280, the Exynos 1380 is meant to be a midrange chip, likely used in upcoming A series devices. Last year, the Galaxy A33 found itself with the Exynos 1280, so it would be easy to assume that the upcoming Galaxy A34 would see Samsung’s newest SoC, though some regions may get the Dimensity MT6877V instead. The Galaxy A34 is set to come with 6GB of RAM and 256GB of expandable storage, according to the latest leaks.
Power like a pro
Experiences powered up. With powerful performance, pro-grade camera, and on-device artificial intelligence (AI), the Exynos 1380 5G mobile processor will upgrade your mobile experience to pro-grade.
Pro-grade power
Load fast. Multitask in a flash. The octa-core CPU of the Exynos 1380 processor consists of four high-performance cores that enable fast app loading and multitasking – along with four power-efficient cores that drive long-lasting battery life. Furthermore, the advanced scheduler allocates tasks to appropriate CPU cores for fast and power-efficient computing. With the optimal balance to manage intensive and always-on tasks, the Exynos 1380 processor is designed to unlock new experiences, enhanced with 5G and AI technologies.
Gaming. Beyond.
Level up with great ease. Equipped with the Arm® Mali™-G68 GPU that features five cores running at 950 MHz, the Exynos 1380 offers powerful and steady graphics processing performance for an immersive, smooth 3D gaming experience. With its enhanced performance and advanced API support, the Exynos 1380 offers users a new kind of gameplay experience based on augmented reality.* The GPU also has efficient power consumption to help prolong battery life for entertainment on the go.
* Based on internal test result compared to the Exynos 1280
Intelligent intelligence
Unlock the potential of mobile experiences. The Exynos 1380 is designed to enable new mobile experiences with an AI engine featuring an enhanced NPU that supports up to 4.9 trillion operations per second.* With the on-device AI capabilities, the Exynos 1380 enables new and smarter mobile experiences such as advanced language recognition for voice assistance. Notably, the Exynos 1380 with NPU enables multiple object recognition in the image to enhance the quality of each object.*
* Based on internal test result compared to the Exynos 1280.
When cameras meet AI
Pro-grade camera for all. The Exynos 1380 features the advanced Triple Image Signal Processor (ISP) based on the cutting-edge technology of flagship processors. The ISP offers flagship-level camera features including up to 200MP image sensor support, zero shutter-lag at up to 64MP, High Dynamic Range, and Electronic Image Stabilization. With cutting-edge AI imaging technology, the Exynos 1380 can recognize various objects to provide optimal image processing of each object, resulting in great photo quality.
Vivid screen Smooth experience
Built for visual comfort. With a fast display refresh rate up to 144Hz at Full HD+, the Exynos 1380 enables a seamless viewing experience and smooth scrolling. Adaptive Tone Control technology adjusts brightness and contrast according to the ambient light to improve visibility, whatever the weather, even in very bright outdoor environments.
Hit 5G speeds
Performance accelerated with 5G. Equipped with an integrated 5G modem, the Exynos 1380 offers fast download speeds up to 3.67 Gbps and upload speeds up to 1.28 Gbps. With this speed and low latency of 5G, the Exynos 1380 supports the user experiences that require lightning-fast network speeds such as live broadcasts or streaming on the go.
5 Companies you didn’t know use AI to do business
Yes, you use them all the time. But did you know some of them can’t really function without the help of artificial intelligence? Here’s the list.
You use them every day; they get the job done for you. They give you instant gratification. But did you know that all these companies are able to deliver services like social media interaction and finding you a good deal on your orders using artificial intelligence?
1. Amazon
Amazon is the biggest online retail service provider around, and it has stayed that way with the help of artificial intelligence. Its Amazon Machine Learning platform provides companies with the ability to predict outcomes and find patterns using data. Additionally, Amazon Echo brings artificial intelligence into the home through the intelligent voice assistant, Alexa.
2. Google
Google has been on the frontier of artificial intelligence, and having acquired nine AI startups, it is deeply invested in furthering AI capabilities. Its main research focus is machine learning, which helps advance Google’s language, speech translation, visual processing, ranking, and prediction capabilities.
3. Facebook
Yup, the social media service with more than 3 billion users around the world: Facebook has made strategic investments in artificial intelligence to operate more efficiently and to make sense of the data being shared on the social media network. To date, Facebook has opened three artificial intelligence labs — its newest lab opened in Paris last year. In addition to its AI labs, Facebook has acquired two AI companies — Face.com, a face recognition company, and Wit.ai, whose technology lets developers create text- or voice-based bots.
4. Intel
Intel has acknowledged the importance of artificial intelligence and its desire to stay ahead of the curve by backing and investing in AI technologies. The company touts its commitment to open source with optimized machine learning frameworks and libraries, as well as its acquisition of Nervana Systems, which gives it access to that company’s machine learning experts.
5. Twitter
Twitter has invested significant funds into artificial intelligence. It has acquired four AI companies to date; its latest acquisition, the AI tech startup Magic Pony, cost it a cool $150 million. Twitter plans to harness the expertise gained through these acquisitions to become a key player in the video space.