Google unveils AI tools for Gmail, Google Docs, Meet, and more


Take the synthesizer: As a longtime electronic musician (and former editor of electronic music magazines), I've always loved synthesizers. Using a specialized set of circuits, these instruments are designed to generate a huge range of interesting sounds from relatively basic audio material. In many ways, today's rapidly growing array of AI tools bears some interesting similarities to them, in that they can synthesize impressive content from collections of simple word-like "tokens" (albeit billions of them!). Generative AI tools are, in a very real sense, content synthesis tools.

The latest entry in the content synthesis arena comes from Google, which is bringing an impressive collection of new capabilities to market through updates to Google Cloud and the Google Workspace productivity suite (Workspace, formerly G Suite, consists of Gmail, Google Calendar, Google Drive, Google Docs, and Google Meet).

After allowing Microsoft to grab much of the attention over the past few weeks through its OpenAI ChatGPT partnership (to the point where articles questioning Google's ambitions for generative AI had begun to appear), it's clear that the company long seen as a pioneer in AI hasn't been resting on its laurels. Today's debut offers a comprehensive set of exciting new apps, services, and approaches that show Google has no intention of ceding the generative AI market to anyone.

The company unveiled several new capabilities: the new Google Cloud Generative AI App Builder for professional developers, upcoming capabilities for all of the Google Workspace productivity apps, Maker Suite for less experienced "citizen developers," a new API for its PaLM large language model (LLM), and the ability to integrate third-party applications and LLMs into its suite of offerings.

Honestly, it's an overwhelming amount of information to take in at one time, but it proves, if nothing else, that a lot of people at Google have been working on these technologies for quite some time.

However, not all of these capabilities will be available immediately. Google laid out a vision of some things it has done now and shared where it's headed in the future, but in a market as incredibly dynamic as generative AI, the company clearly felt compelled to make a statement.

Some of the most interesting aspects of Google's vision for generative AI revolve around openness and the ability to collaborate with other companies. For example, Google talked about the idea of an enterprise model "zoo," where different LLMs can be connected to different applications. So, while you could certainly use Google's newly upgraded PaLM (Pathways Language Model) text or chat models in enterprise applications via API calls, you could also use third-party or even open-source LLMs instead.
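
To make the model "zoo" idea a bit more concrete, here is a minimal Python sketch of how an enterprise application might code against a single interface and swap a cloud-hosted model for a self-hosted open-source one. The class names, endpoint, and payload shape are illustrative assumptions, not Google's actual PaLM API.

```python
from dataclasses import dataclass
import requests


class TextModel:
    """Common interface the application codes against, regardless of which LLM backs it."""

    def generate(self, prompt: str) -> str:
        raise NotImplementedError


@dataclass
class CloudHostedModel(TextModel):
    """Calls a cloud-hosted text model over HTTP.

    The endpoint and payload shape here are placeholders, not a documented API.
    """
    endpoint: str
    api_key: str

    def generate(self, prompt: str) -> str:
        resp = requests.post(
            self.endpoint,
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"prompt": prompt},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["text"]


@dataclass
class LocalOpenSourceModel(TextModel):
    """Stand-in for a self-hosted open-source LLM loaded in-process."""
    model_name: str

    def generate(self, prompt: str) -> str:
        # Real code would run local inference here; this stub just echoes.
        return f"[{self.model_name}] response to: {prompt}"


def summarize_ticket(model: TextModel, ticket_text: str) -> str:
    # The application logic never cares which model is plugged in.
    return model.generate(f"Summarize this support ticket:\n{ticket_text}")
```

The point of the pattern is simply that swapping one LLM for another becomes a configuration decision rather than a rewrite of the application.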

This degree of flexibility across many LLMs is impressive, though I couldn't help but think that corporate IT departments could quickly start to feel overwhelmed by the array of options that will be available. Given the inevitable demands of testing and compliance, there may be some value in limiting the number of options organizations can use (at least initially).

Google also made a big point of emphasizing that organizations can layer their own data on top of Google's LLMs to customize them to the organization's unique needs. For example, companies can ingest some of their original content, images, styles, and so on into an existing LLM, and that custom model can then be used as the primary AI engine for enterprise content-generation applications. These kinds of customizations could be particularly attractive to many organizations.
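
To illustrate what layering company data onto a model might involve in practice, here is a hypothetical Python sketch of the data-preparation step: collecting internal content into prompt/response pairs that a tuning or grounding pipeline could ingest. The folder layout, field names, and output format are assumptions for illustration, not a documented Google workflow.

```python
import json
from pathlib import Path

# Hypothetical example: turn a folder of internal style-guide snippets into
# prompt/response pairs for a downstream tuning or grounding pipeline.
SOURCE_DIR = Path("company_docs")          # internal content to learn from
OUTPUT_FILE = Path("tuning_examples.jsonl")


def build_examples() -> int:
    count = 0
    with OUTPUT_FILE.open("w", encoding="utf-8") as out:
        for doc in sorted(SOURCE_DIR.glob("*.txt")):
            text = doc.read_text(encoding="utf-8").strip()
            if not text:
                continue
            example = {
                "input": f"Write a product note in our house style about: {doc.stem}",
                "output": text,
            }
            out.write(json.dumps(example, ensure_ascii=False) + "\n")
            count += 1
    return count


if __name__ == "__main__":
    print(f"Wrote {build_examples()} examples to {OUTPUT_FILE}")
```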

There were also a number of announcements about the partnerships Google is forging with a variety of different vendors, from lesser-known AI startups like AI21 Labs and Osmo to fast-rising developers like code-generation tool maker Replit and LLM developers Anthropic and Cohere. On the generative imagery side, Google highlighted its work with Midjourney, which allows not only the initial creation of images via text descriptions but also text-based editing and enhancement of them.

Google also made sure to emphasize customizability within existing models. The company showed how individuals can adjust settings for various model parameters as part of their initial query to set the level of accuracy, creativity, and more they can expect from the output. Unfortunately, in classic Google fashion, engineering-specific terms were used for some of these parameters, which makes it unclear whether regular users will actually be able to understand them. The concept behind it is great, however, and fortunately the parameter naming can be modified.
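
As a rough illustration of what those query-time knobs typically look like, here is a short Python sketch. The endpoint and payload shape are placeholders, but parameters such as temperature, top-k, and top-p are the standard sampling controls most LLM services expose in some form; whether Google surfaces exactly these names is not confirmed here.

```python
import requests

# Illustrative only: the endpoint and payload are assumptions, but the knobs
# shown (temperature, top_k, top_p, max output length) are common sampling
# parameters for text-generation models.


def generate(prompt: str, *, creative: bool) -> str:
    params = {
        "prompt": prompt,
        # Higher temperature / broader sampling -> more varied, "creative" text;
        # lower values -> more predictable, conservative output.
        "temperature": 0.9 if creative else 0.2,
        "top_k": 40 if creative else 5,
        "top_p": 0.95 if creative else 0.8,
        "max_output_tokens": 256,
    }
    resp = requests.post(
        "https://example.com/v1/generate",   # placeholder endpoint
        json=params,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]
```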

Some of the more interesting demos Google showed for Workspace included the ability to modify existing content (for example, shifting from a formal written tone to informal language) or to extrapolate from a relatively limited input prompt. Admittedly, other generative AI tools have already demonstrated these kinds of capabilities, but the user interface and overall experience that Google demonstrated seemed very intuitive.

Among the key AI features coming to Workspace, Google highlights:

  • Draft, reply, summarize, and prioritize your Gmail
  • Brainstorm, proofread, write and rewrite documents
  • Bring your creative visions to life with automatically generated images, audio, and video in your presentations
  • Navigate from raw data to insights and analysis with autocomplete, formula generation, and contextual categorization in Sheets
  • Create new backgrounds and take notes in Meet
  • Enable workflows for getting things done in Chat

In addition to software, Google touched on the hardware side of the Google Cloud infrastructure that supports all of these efforts for both Vertex AI and Workspace. The company indicated that many of these workloads are being run on various clusters of its own TPUs alongside powerful Nvidia GPUs. While much of the focus on generative AI applications has been solely on software, there is no doubt that hardware innovations in the semiconductor and server space will continue to greatly influence AI developments.

Returning to the synthesizer analogy, the LLM developments that Google's new offerings highlight in many ways reflect the variety of audio engines and architectures used in synthesizer design. Just as there are many types of synthesizers, with the primary differences coming from the raw source material used in the audio engine and the signal path it moves through, I expect to see more variety in generative LLMs as well. There will likely be a variety of source materials used in different models and different architectures through which they are processed. Likewise, the degree of "programmability" is likely to vary quite a bit, from a modest number of predefined options to the full (but potentially overwhelming) flexibility of modular designs, just as it does in the world of synthesizers.

In terms of availability, many of Google's new capabilities are initially limited to a group of trusted testers, and pricing (and even purchase options) for these services remains undisclosed.

For casual users, some of the text-based content creation tools in Docs and Gmail are likely to be the first taste of Google-driven generative AI that many people experience. And, as with Microsoft, future iterations and improvements will undoubtedly come at a very fast pace.

There is no doubt that we have entered a very exciting and competitive new era in enterprise computing and the technology world in general. Generative AI tools have sparked a dizzying array of potential new applications and productivity improvements that we're just beginning to wrap our heads around. As with many big tech trends, some hype is inevitable. However, it's also clear that Google has now firmly established itself in the rapidly developing world of generative AI tools and services. What happens next isn't clear, but it's going to be very exciting to watch.

Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic advice and market research services to the technology industry and the professional financial community. You can follow him on Twitter @bobodtech.
