Following the Duet AI announcement yesterday, many more people who signed up for Google Workspace Labs are now seeing the generative AI features in Gmail and Docs that “Help you write.”
To tell if you have it in Gmail on the web, start composing an email, and you’ll see a new “Help me write (Labs)” button next to “Send” and formatting options in the bottom toolbar.
Afterward, a blue/purple-ish messaging field appears at the bottom of your screen for you to enter a prompt, with Google rotating through suggestions. It takes a few seconds for something to generate, and you then have the ability to:
Formalize: Makes the draft more formal
Elaborate: Adds details to build upon the text
Shorten: Shortens the draft
I’m Feeling Lucky: Updates draft with creative details
You can also ask Google to “Recreate,” while “Insert” will paste and let you make further edits. Google marks with brackets where you should delete and enter your name or other specifics.
In Google Docs, opening a new page shows a “Help me write” chip. The workflow is the same as in Gmail, and you can access it again via the “Help me write” button that sits to the left of your cursor at the edge of the page.
Before I/O, Google said it was expanding its Trusted Tester program by 10x. Generative AI features in Google Sheets and Slides (the latter used to create images) are not yet live — and “sidekick” is further down the road — with today’s expansion continuing the public testing that started in March. We’re seeing it live on the web right now, but not on Android.
You can sign up for Google Workspace Labs’s Gmail and Google Docs features here.
Google branding generative AI in Gmail, Workspace as ‘Duet AI’
Google has been publicly testing features that help users write in Gmail and Docs over the past few weeks. Generative AI is now coming to Sheets, Slides, and Meet with a new name: Duet AI for Google Workspace.
“Duet” evokes a sense of contextual collaboration, which is how Google sees the relationship between users and generative AI. (If the name is familiar, Chrome used it for a redesign that never launched.)
In Gmail, Google Docs, and Slides, you’ll eventually get a Duet AI side panel, called “sidekick.” It can be launched next to your profile avatar in the top-right corner, and it analyzes your email or document. In Google Slides, it can create speaker notes for each slide.
In Google Slides, generative AI will generate images from text prompts. You’ll get a “Help me visualize” side panel to enter what you want with the ability to choose a style: none, photography, illustration, flat lay, background, and clip art. You’ll get a grid of 6-8 designs with the ability to “View more.”
Duet AI in Google Meet can be used to create background images: “It’s a subtle, personal touch to show you care about the people you’re connecting with and what’s important to them. And you can change that visual with an equally stunning and original one — all in just a few clicks.”
Google Sheets is using gen AI for automatic table generation with a “Help me organize” field. An example prompt is “Client and pet roster for a dog walking business” with columns like dog, address, email, date, time, duration, and rate offered. You get a preview before inserting.
…simply describe what you’re trying to accomplish, and Sheets generates a plan that helps you get organized.
These three features are coming to Google Workspace Labs, with the Trusted Tester program expanding by 10x just last week. Since March, Google says it has had “hundreds of thousands” of such testers.
These features are hitting general availability later this year for business and consumer Workspace accounts. Check out labs.withgoogle.com in the meantime.
As Google’s biggest show of the year, every I/O brings a ton of news. However, the stakes for I/O 2023 seem bigger, with announcements that could more thoroughly change how people use Google’s biggest products.
Google AI
Gmail, Docs, and Workspace
Artificial intelligence is, of course, responsible for this. Google has already shown generative AI features in Gmail and Google Docs, with testing already underway. Meanwhile, Google has briefly previewed bringing image generators into Google Slides and having Google Meet automatically create notes from a video call.
At I/O 2023, Google needs to provide a fuller picture of how AI will integrate into its Workspace apps beyond individual features. Equally important are details on a public launch and how these features will be available to the (non-Workspace) public. The latter might be where Google One comes into play. For initial testing, it makes sense for features like those that have already been announced in Gmail and Google Docs to be free.
However, since generative AI is computationally expensive, it makes sense for Google to eventually put them behind a paid subscription. Today, 2TB or higher Google One tiers ($9.99+/month) provide premium Google Meet features like 1080p streaming and longer calls, and it would make sense for some (if not most) generative AI features to be locked behind that.
Search
As Google’s crown jewel, many stakeholders will want an update on how AI is coming to Search. There’s, of course, the Wall Street crowd, while end users have shown that chatbot-style queries and answers are something they’re at least interested in. The company has already previewed AI Insights in Search when it announced Bard, but we need a fuller look at the end-to-end experience.
Chrome
Having a chatbot in Chrome that lets you ask questions about the page you’re currently viewing has been rumored and does indeed sound useful. As a significant entry point for how people use Google, a generative AI presence needs to exist in Chrome.
Assistant
Generative AI and its conversational nature seem ripe for voice assistants. As we’ve talked about in the past, Google Assistant is at an impasse, with its feature set shrinking. The team behind it is currently tasked with Bard development, so it’s unclear whether Google is at a point where it’s ready to announce upgrades. If it did, Google could position Assistant as being more capable than Siri or Alexa, while Microsoft expressly does not currently have a voice assistant.
For the sake of end users, I think Google needs to publicly recommit to Assistant at this I/O to assure them their devices still have a long future. It would be nice if the company provided an upgrade roadmap, but even assurances would be a start at this point after months of no real developments.
Developer tools
I/O’s roots are as a developer conference, and there will undoubtedly be AI stuff for that crowd. Of particular interest will be assistive tools in Android Studio to aid app development.
Android
Android 14
We will obviously be getting the major tentpoles for Google’s upcoming mobile release at I/O 2023, followed by Android 14 Beta 2 to hopefully test some of them out. So far, Android 14 feels like an iterative update that continues to build on Material You. For example, we spotted that bolder Dynamic Color theming is coming.
Android XR
Samsung teased an XR device (headset) running Android in February. We’ve yet to hear anything about the OS, and I/O would be the time to announce it (which also has the benefit of preempting Apple’s realityOS announcement this June). This starts the long road to third-party developer buy-in.
Google needs to share its vision for this form factor, both short and long-term. In the near term, bulkier headsets could allow for productivity and entertainment use cases. Glasses are the future, but until then, we need devices and an OS that will let developers start experimenting with these experiences. It was recently rumored that Apple’s upcoming headset will run iPad apps. Does Google have the same idea, thus providing another reason for Android pushing into large-screen development?
Wear OS
Wear OS 3 was announced in 2021, and we quietly got version 3.5 last year. The timing would be about right for Wear OS 4, which will in all likelihood coincide with an underlying upgrade to Android 13 and bring Material You.
Better Together: ChromeOS, Wear OS, Google TV
As of late, the Android team has been very big on cross-device experiences that emphasize the benefit of going all-in with the ecosystem. Earlier this month, Google released a Cross-Device Services app to power ChromeOS app streaming. We’ll presumably get a demo and launch date for that at I/O. We’re also waiting for the ability to unlock your Android phone with a paired Wear OS watch.
On the entertainment front, we’re waiting for more Better Together initiatives. Previously, rumors have mentioned connecting Nest and third-party speakers to Google/Android TV devices, while easier-to-access smart home controls and other integrations are on the roadmap (for 2024). We’re also waiting for Fast Pair to arrive for Google TV and Android TV.
Find My Device
Somewhat related to Better Together and the Android ecosystem is Find My Device becoming a broader network that includes third-party accessories. Google has been laying the groundwork for this by saying it would be “encrypting and storing your device’s most recent location with Google.” Meanwhile, there have been persistent rumors of a Google-made tracker.
Made by Google
Pixel 7a, Tablet, and Fold
It seems like we’re back to immediate availability with the Pixel 7a. This was the case for Pixel 3a at I/O 2019 and seemed to be what Google was aiming for in subsequent years, but the world had other ideas.
We should finally get launch details about the Pixel Tablet a year after it was first teased, while Google will be entering a new hardware category with the Pixel Fold.
In May 2022, Google gave an “early preview” of the Pixel 7 series and Pixel Watch, as well as a “sneak peek” of the Pixel Tablet, in what seemed to be a rather unprecedented teaser.
In the case of the phone, it allowed Google to really get ahead of leaks. Before I/O, there were only a pair of leaked renders that got some things about the design right. It was somewhat less successful for the Pixel Watch, which leaked in full (left at a restaurant) and even had an AMA, while the Pixel Tablet reveal dovetailed nicely with the large-screen Android app push.
Ahead of I/O 2023, the company could certainly replicate the strategy for the same reasons. These previews are meant to provide only a high-level overview. For the Pixel 7, that meant the design — showing how the language introduced the year prior would continue, but with a modified camera bar — as well as confirmation that a second-generation Tensor chip was coming.
The design of the Pixel 8 and 8 Pro has more thoroughly leaked via renders at this point, so Google would be covering the same ground, though it would get a chance to reveal the colors itself. It would be nice if a “Tensor G3” mention touched upon what the improvements actually are, while the thing everyone really wants to know is what the camera improvements will be, especially given the new sensor on the 8 Pro.
The case for a Pixel Watch 2 teaser is somewhat more mixed. Since the Pixel Watch is a first-generation product, we don’t know what the update cadence will be. An annual cycle would make a great deal of sense if we look at the Apple Watch and Samsung Galaxy Watch, but the Fitbit Sense and Versa lines were refreshed every two years. The improvements for a Pixel Watch 2 would be obvious, with a newer chip, more activated sensors (SpO2 and skin temperature change estimation), and a bigger battery.
I don’t expect the domed design to drastically change beyond maybe thinner bezels, with the band system at least staying for another generation to ensure accessory compatibility. A Pixel Watch 2 teaser would have to touch on some new hardware features, but I’m not sure Google would want to do that and break the high-level overview nature of these previews.
As always, another factor in doing teasers is possibly cannibalizing sales of the existing Pixel Watch and Pixel 7 series. Google doesn’t seem to mind or at least has different priorities, but it does seem wild to make the effective life span of the latest and greatest product only 7-8 months.
I think a teaser would more significantly impact sales of the first-generation wearable. As a prospective buyer of the mid-cycle Pixel Watch, knowing that a second-gen was coming in the fall would give me pause if I wanted a more future-proofed purchase. Today’s version is fine and has a battery that can last you a full day, but it’s unknown how it will continue to perform, especially once major OS updates arrive.
Fitbit
After major removals with the promise of new capabilities on the horizon, Fitbit needs to start sharing the second part of its plan, from a redesigned app to new capabilities. I/O would be the time to do that. Meanwhile, Fitbit integration that shows live exercise stats on Google TV has already been rumored, continuing the Better Together tentpole.
Google Home
Besides the Google Home app currently being in Public Preview, the company teased a number of other features last year. This includes the web-based Script Editor and more grouping options with Custom Spaces. We’ll hopefully get more updates on that.
Google Bard is better at debunking conspiracy theories than ChatGPT, but just barely
One of the concerns about generative AI is the easy, hard-to-contain spread of misinformation. It’s one area many hoped Google Bard would rise above existing options, and while Bard is better at debunking known conspiracy theories than ChatGPT, it’s still not all that good at it.
News-rating group NewsGuard tested Google Bard against 100 known falsehoods, as the group shared with Bloomberg. Bard was given 100 “simply worded” requests for information around these topics, all of which relate to false narratives that already circulate on the internet.
That includes the “Great Reset” conspiracy theory that tries to suggest COVID-19 vaccines and economic measures are being used to reduce the global population. Bard apparently generated a 13-paragraph reply on the topic, including the false statement that vaccines contain microchips.
Bard generated “misinformation-laden essays” on 76 of the 100 topics. However, Bard did debunk the other 24 topics, which, while not exactly a confidence-inspiring total, is still better than competitors. In a similar test, NewsGuard found that OpenAI’s ChatGPT based on the latest GPT-4 didn’t debunk any of the 100 topics, while GPT-3.5 generated false narratives for 80 of them.
In January 2023, NewsGuard directed ChatGPT-3.5 to respond to a series of leading prompts relating to 100 false narratives derived from NewsGuard’s Misinformation Fingerprints, its proprietary database of prominent false narratives. The chatbot generated 80 of the 100 false narratives, NewsGuard found. In March 2023, NewsGuard ran the same exercise on ChatGPT-4, using the same 100 false narratives and prompts. ChatGPT-4 responded with false and misleading claims for all 100 of the false narratives.
Google has, of course, not been particularly shy about acknowledging that Bard can produce responses like this. Since day one, Bard has shown warnings about how it is an “experimental” product and that it “may display inaccurate or offensive information that doesn’t represent Google’s views.”
Misinformation is a problem that generative AI products will clearly have to work to improve on, but it is clear Google has a bit of an edge at the moment. Bloomberg tested Bard’s response to the conspiracy theory that bras can cause breast cancer, to which Bard replied that “there is no scientific evidence to support the claim that bras cause breast cancer. In fact, there is no evidence that bras have any effect on breast cancer risk at all.”
NewsGuard also found that Bard would occasionally show a disclaimer along with misinformation, such as saying “this claim is based on speculation and conjecture, and there is no scientific evidence to support it” when generating information about COVID-19 vaccines having secret ingredients from the point of view of an anti-vaccine activist.
Google is working on improving Bard. Just last week, the company said it was upgrading Bard with better support for math and logic.
Google’s next Bard update brings ‘more variety’ to drafts
Google is rolling out a new update to its Bard AI experiment this week that will expand on one of the platform’s unique aspects in “drafts.”
As confirmed on Bard’s new “Experiment updates” changelog that Google introduced earlier this month, the second update to Bard is set to be available tomorrow, April 21. Google says the update will add “more variety” to Bard’s drafts.
Drafts in Google Bard appear with each response generated by the AI experiment. Alongside the main reply, a “view other drafts” button will show three responses that were generated from the same prompt. This gives the AI more chances to respond to the user’s prompt without the need to re-issue it. But, often, the other drafts will include limited, if any, additional information. The most common place you’ll find unique information in a different draft is with recipes and similar topics.
With this next update, Bard’s drafts will be “more distinct from each other” according to Google in an effort to “expand your creative explorations.”
Adding more variety to drafts
What: When you view other drafts, you’ll now see a wider range of options that are more distinct from each other.
Why: A wider range of more distinct drafts can help expand your creative explorations.
In Bard’s previous and inaugural update, Google expanded on the “Google It” button to suggest additional related topics. That update also provided better support for math and logic prompts.
Outside of Bard, Google is reportedly working on other major expansions to its AI efforts. This includes integrating AI into Search, with a new effort known as “Magi.”
Hands on: Bard AI is just as rough around the edges as Google said it was
Google opened up early access to Bard, its generative AI chatbot, and we’ve had a bit of time to play around with it. The takeaway so far? Google isn’t exactly treading new ground here, but Bard is at least much more clear on what it can do, can’t do, and where it falls short.
What can you do with Bard?
Google Bard is a generative AI product built on the LaMDA model introduced in 2021. Bard uses that underlying tech to respond to prompts, generate text, answer questions, and more. Google summarizes Bard, saying:
Bard is powered by a large language model from Google that can generate text, write different kinds of creative content, and answer your questions in an informative way.
So what can you do with Bard?
The first thing that comes to mind, especially following the debut of Bing’s GPT-powered chat experience, is to use Bard to find answers to questions or help you better understand a topic. And to that end, it works rather well.
Asking Bard to explain an aspect of a smartphone or summarize a recent news topic results in a very readable explanation that, at least in my limited usage thus far, feels less long-winded and much more concise than what Bing and ChatGPT usually offer. That’s not to say the actual word count is always shorter, but Bard’s replies are phrased in a way that’s just easier to read.
Google offered more information, though it did get the front-facing camera spec wrong.
Google has made it clear that Bard AI isn’t meant to replace traditional Search at this point, but it is impressive how Bard can quickly pull together a lot of information into a concise format. And it’s probably for the best that Bard, as it exists today, is not replacing Search because, in this current format, Bard rarely shows where it is getting information, and even when it does, it’s very limited.
Another way I found Bard useful was for coming up with recipes. I love to cook and come up with ideas for dinner on the fly, but it’s always helpful to have some sort of foundation to form those ideas off of. Bard seems to be really good at that. Asking for a recipe with a handful of ingredients pulls together some ideas, and using the “drafts” Bard generates, I get a few options at once. The responses are sometimes not very helpful or a bit boring, but I can see these ideas giving me something to work off of.
Having multiple responses on hand without reissuing the prompt seems genuinely useful
But really, Google isn’t doing anything new with use cases like this. Bard is doing the same thing as ChatGPT, just with updated information. That’d be impressive if Bard had launched a month ago, but Microsoft’s Bing is already doing the same thing too, and all based on OpenAI’s GPT-4 model.
Google Bard still makes plenty of mistakes
The big thing that many, myself included, were hoping to see Google Bard improve on over other AI tools is accuracy. It’s really easy to get other generative AI products to generate nonsense – known as “hallucinations” – or simply get a lot of simple facts wrong.
In my use so far, Google Bard doesn’t seem noticeably better on this front. In comparing some responses from Bard side by side with Bing, I noticed fewer errors with technical details on smartphones, but responses still commonly had errors and mistakes throughout.
Bard incorrectly says the main sensor in the Find X6 Pro is the IMX890 instead of the IMX989.
Some of the mistakes I saw Bard make were as simple as an incorrect figure. For instance, a question about the Pixel 7 Pro saw Bard telling me that Tensor G2 was built on a 4nm process, something that’s simply not true. There are also plenty of errors that just go against common sense, such as Bard implying the Pixel 7 and Pixel 7 Pro haven’t been released.
Getting away from smartphones, information about other topics results in similar mistakes.
When I asked Bard to create a vegan meal plan, it spit out a helpful list of ideas, but it threw in yogurt and hard-boiled eggs as snacks, which obviously don’t fit a vegan diet. And when I asked Bard to update the list to remove items with beans, it essentially spit out the same list again, still with black bean burgers in place.
These mistakes are common for generative AI and show how Bard is still not quite up to par with typical search results.
And what’s frustrating is that Google Bard doesn’t cite its sources. While Bing shows links to where it pulls information throughout, Bard only occasionally shows a link to where its information came from. Maddeningly, you can’t even manually ask Bard to show that information.
Google clearly doesn’t want you to think Bard is a finished product
But there’s one thing about Google Bard that really stood out to me against other AI tools like it. Google isn’t treating this like a finished product, and it’s doing its due diligence to be responsible about what the AI is spitting out.
Throughout your use of Bard, Google will remind you again, and again, and again that Bard is an AI, and its information won’t always be correct. There’s a constant banner under the chat box that directly says:
Bard may display inaccurate or offensive information that doesn’t represent Google’s views.
Further, Bard holds back on lots of sensitive topics. If you ask about medications or even something like weight loss, Bard might just avoid the topic altogether. You also can’t get Bard to explain its sources or talk about specific people. Asking Bard to offer up details on a person just doesn’t work, although you can still trick the system by using a social handle or username (sometimes with crazy results).
There are also more subtle ways Google is implying that Bard isn’t finished. There’s no prominent logo or branding outside of the “diamond” icon seen alongside replies. There’s not even an icon when you create a shortcut to the product on your smartphone’s homescreen.
There are two notices about Bard the moment you open it.
And of course, there’s the fact that Bard is currently siloed off from the rest of the company’s offerings. There’s no Bard in Google Search, or Workspace apps, or anything else. That’s coming, but this early preview is just that – an early chance to try out the tech that powers Bard rather than using it alongside the rest of Google’s suite.
There are two ways to look at this, one being that Google is just trying to be more responsible with Bard AI compared to some others. That’s certainly part of the equation, but reading between the lines, it also seems like Google is trying to excuse the fact that it is a bit behind the curve. Bard is good, but it’s not better than what Microsoft and OpenAI are putting in front of customers. It’s rough around the edges, and Google was definitely right to temper expectations.
Now, the question is just whether Bard’s future can actually prove to be better.
You can’t use Bard with a Google Workspace account yet
Google just opened up access to Bard, its generative AI product, via a waitlist today. However, you won’t be able to use Bard, or even sign up for that waitlist if you have a Google Workspace account.
The requirements to use Bard during its early access period are not particularly strict. For instance, Bard will work on most browsers, including Google Chrome, Microsoft Edge, and Apple’s Safari. That’s certainly more flexible than what Bing has been doing with its GPT-4-powered AI experience.
One limit that rules out a lot of younger users is age. Google says that you need to be at least 18 years old to use Bard. That makes sense, given Google directly warns that, like other generative AI tools, Bard can sometimes go a little off the rails and deliver inaccurate or even offensive responses.
But perhaps the biggest restriction is that, at least for now, Google Bard doesn’t work with Google Workspace accounts.
If your Google account is managed by an organization (or parent/guardian), it can’t be used for Bard. This includes Workspace accounts that utilize a custom domain instead of “@gmail.com” for Gmail and Google sign-in. Attempting to use a Workspace account on Bard shows the error message below.
It’s not entirely clear why this restriction is in place, especially with Google’s clear vision for generative AI in Workspace products, but the fact is that it is in place as of today. We suspect this may change over time, but it’s hard to tell at this point.
Google gave an overview of what generative AI features are coming to Workspace apps two weeks ago and is now beginning public testing in Gmail and Docs.
Today’s trusted tester program spans consumer, enterprise, and education users (over 18) in the United States. This “small group,” invited to join by Google, must sign up and opt in, with the ability to leave the program at any time.
In Gmail, you can use generative AI to draft everything from a birthday invitation to a job cover letter. Users can also have Google take what they’ve written and make it more elaborate or shorten it, including down to bullet points. There’s also the ability to “Formalize” a message, while Google has shown off an “I’m feeling lucky” option that adds levity and makes other whimsical stylistic choices (e.g., emoji).
So far, Google has shared what the UI looks like in Gmail for Android, and we’ve spotted it in development. A floating action button (FAB) appears in the bottom-right corner of the Compose screen, revealing the options.
Similarly, AI in Google Docs can make text more detailed or rewrite it to be concise. It can also be used to draft blog posts or even write song lyrics. There will be a “Help me write” button on the web that expands when clicked to reveal a prompt input. Google then generates your request, with users able to thumbs up/down, generate/”View another,” and “Refine.” You can then “Insert” it into your current document and make edits.
Within Gmail and Docs, those enrolled in the test program will be able to submit feedback that Google will use to refine and iterate on the generative AI functionality. This will mark the first time that people outside of Google have access to these Workspace capabilities.
Google will be expanding availability “over time,” with those interested told to monitor a new landing page for opportunities to participate in the future. At the moment, there’s no Bard-esque waitlist to be joined.
The shape and color of Google AI
From what was shared earlier this week in Gmail and Docs, Google Workspace is using a pencil icon with a star in the top-left to brand its generative AI features. (The pencil or pen itself is a generic icon, and already used today in various FABs, like Compose in Gmail.)
So far, we’ve concretely seen it in:
Gmail (on mobile): FAB above your keyboard in the bottom-right corner. The sheet that slides up is titled “Help me write,” with Formalize, Elaborate, Shorten, Bulletize, I’m Feeling Lucky, and Write a draft. As the email is created, the gen AI icon remains in the top-left corner with the capability you selected next to it.
Google Docs (on desktop): Pill-shaped “Help me write” button with the icon. Tapping expands to a full-width text box to write your prompt.
More interesting than the icon is the blueish-purple hue used throughout. In the Google Docs example, it’s the background of the button and expanded text field. As text is generated, it first appears in that color before switching to black. Similarly, the blue “Create” button turns to “Creating…” with a pulsating background as it’s working. This was also the case in Gmail for Android.
The “new era for AI and Google Workspace” announcement has more examples of this, though the UIs shown there are presumably less finalized than Gmail and Docs. It’s an interesting hue, with this text loading effect being somewhat whimsical while also masking that generative AI literally needs a second to work.
We previously argued that “Google Assistant” should be how the company brands AI features that users manually invoke. For the initial launch, Google is just associating the generative AI capabilities directly with each product rather than suggesting that a separate AI product/service has been added to Gmail, Docs, etc.
Microsoft is taking the opposite path. After renaming the Office suite to “Microsoft 365” last year, it’s adding “Copilot” (branding that the company has previously used in conjunction with GitHub) to Word, PowerPoint, Excel, Outlook, and Teams. It’s the equivalent of slapping an “AI” sticker on metaphorical software boxes.
Historically, Google has shied away from that flashier approach in its Workspace products. Features like Smart Reply and Compose just stand alone, even as they exist across Gmail, Docs, and Chat. It very much fits how Google names its products very plainly after their main function rather than coming up with a brand.
It remains to be seen which strategy wins out (i.e., attracts more users) for generative AI in productivity apps. Microsoft wants to make a splash and invigorate its (already widely used) tools. Inherently, giving something a name means people know what to call and credit it. Alternatively, it gives users something to blame. (Alas, poor Clippy!)
Meanwhile, Google is going for a somewhat timeless approach by framing the addition of gen AI tools as a continuation of how it iterates products to be helpful. In that sense, generative AI – once it becomes commonplace and widely adopted – could just be an evolution rather than a revolution in the long history of computing.