Gen AI is the answer. What was the question? Please welcome Vice President of Professional Services, AWS, Francesca Vasquez.
Thank you, and welcome to re:Invent 2023.
Technology is transforming every imaginable industry, and generative AI is at the forefront. It has captured our imaginations with its ability to create images and videos, write stories, and even generate code. As you heard from Adam earlier this week, generative AI is truly an expansive opportunity at every level, across every customer segment. It is going to be fundamental to how the world operates: businesses, consumers, and everybody in between.
Now, I get that there is a lot of hype around generative AI, so my goal for today's session is to translate some of that hype into real impact: how companies are building production-ready architectures that deliver innovation. We have a lot to cover, so let's dive into how we got to this moment.
We've reached a critical tipping point for generative AI. With the massive proliferation of data, highly scalable compute, and advances in ML technologies, with deep learning model architectures like transformers, vision transformers, and diffusion models, generative AI is finally taking shape.
In particular, it's these advancements in machine learning that have made generative AI possible. Until just the last few years, traditional forms of machine learning were good: they allowed us to take very simple inputs, like numeric values, and map them to very simple outputs, like predicted values.
With deep learning, we could then take complex inputs, like videos and images, and map them to relatively simple outputs. Traditional machine learning is still important and very useful across a variety of workloads.
But traditional machine learning models use architectures that require months of costly, manual data preparation, data labeling, and model training, all to produce a model for one single, specific task.
With generative AI, we can leverage massive amounts of data to capture and present knowledge in far more advanced ways, now mapping complex inputs to complex outputs. The large models that power generative AI applications, called foundation models or FMs, are driven by the transformer-based neural network architecture.
And it is this architecture that enables models to be pre-trained on massive amounts of unlabeled data, such that they can be used out of the box for a wide variety of generalized tasks and easily adapted to particular domains or industries with relatively small amounts of data.
So how do we interact with these foundation models? The answer is very simple: we use prompts. Prompts are clear, descriptive directions that guide the model to generate exactly what we want. I like to say prompts are the new UI. You can describe what you want the foundation model to generate and what format you want the output in, and even provide additional context, like data from your databases or documents, through techniques like retrieval-augmented generation, or what many of you this week have heard called RAG.
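To make that concrete, here is a minimal sketch of what a RAG-style prompt can look like. The policy snippet and question are hypothetical placeholders, not from the session:

```python
# A minimal RAG-style prompt template. The retrieved context and question
# below are hypothetical placeholders, not real policy data.
retrieved_context = (
    "Policy 1234: Water damage is covered up to $5,000 per incident, "
    "excluding flood events."
)
question = "Is water damage from a burst pipe covered?"

prompt = (
    "Answer the question using only the context below. "
    "If the answer is not in the context, say you don't know.\n\n"
    f"Context:\n{retrieved_context}\n\n"
    f"Question: {question}\n"
    "Answer:"
)
print(prompt)
```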
Customers are discovering that, despite what others might have you believe, and despite its cutting-edge nature, generative AI is increasingly user friendly and easy to integrate into many of your applications.
As customers build generative AI architectures, there are five core design principles that are having real impact. First, many of our customers are finding that generative AI is easier than they thought, with their developers able to get up and running quickly without any specialized ML expertise.
Second, as you heard in Swami's keynote earlier, customers want choice. They want greater innovation, better experiences, more personalized experiences, and more business impact.
Third, your data is your differentiator. Your data cloud strategy is mission critical if you're going to have a generative AI strategy.
Fourth, security should always be a top priority. We're going to talk a little bit about how to build with security and responsible AI in mind.
And fifth, probably my favorite: there's no gen AI without the cloud.
So let's dive in. Building with gen AI is much easier than all of you think. Our goal is to make sure that you don't have to be a machine learning professional to do this, which is why we built and launched Bedrock.
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models. With Bedrock, customers can easily build and scale AI applications using a selection of foundation models through a single API, with no infrastructure to manage: your developers just point to and access a single API.
We also make it very easy through Bedrock to customize your own private model for a specific task, by simply pointing to a few labeled data examples in Amazon S3. Bedrock also supports RAG, as I mentioned, as a way of leveraging that data with foundation models to produce customized, high-quality output.
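As a rough sketch of what pointing to labeled examples in S3 can look like in code, something like the following starts a Bedrock model customization job. The role ARN, bucket paths, and hyperparameters are hypothetical placeholders, and not every base model supports customization:

```python
import boto3

# Bedrock control-plane client (model customization, not inference).
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Hypothetical values: replace the role ARN, buckets, and hyperparameters
# with ones from your own account before running.
bedrock.create_model_customization_job(
    jobName="product-descriptions-finetune",
    customModelName="my-custom-titan",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "2", "batchSize": "1"},
)
```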
Another exciting area for us: Bedrock now has support for agents that can actually execute multi-step tasks. And of course, the data you put into Bedrock is encrypted, and neither your content nor your data is used to improve the base models.
We also don't share your content and data with third-party providers. So when we say it's easy to get up and running with generative AI, we mean it. With Amazon Bedrock, we want you to be able to get going with just a few lines of code: you import our SDK, select your model, and send your prompt. The way I view it, this is the "hello world" of generative AI on Bedrock.
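As an illustration, a minimal "hello world" against Bedrock with the AWS SDK for Python (boto3) might look like the following; the model choice and prompt are just examples:

```python
import json
import boto3

# Bedrock runtime client for model inference.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Invoke Amazon Titan Text with a simple prompt. The request and response
# shapes are model-specific; this is the Titan Text format.
response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=json.dumps({"inputText": "Hello, world! Say hi back."}),
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```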
I might be older than a few of you. We also just announced PartyRock, a new website where you can build AI-generated applications. It's a playground powered by Bedrock: a fast, fun, and very easy way to learn about generative AI and get a sense of how easy it is to start building.
A few other things. Another easy way to get started with generative AI on AWS is through Amazon CodeWhisperer, our AI coding companion designed for developers. CodeWhisperer generates code recommendations from plain language, based on contextual information like your code. And it's super easy to start using: it works with your favorite IDEs, and also with the command line if you prefer.
Now, customers want choice. Let's talk about what that means. We support a variety of foundation models: from AI21 Labs, who you'll hear from today, Anthropic, Cohere, Meta, Stability AI, and of course our own foundation model family, Amazon Titan.
Bedrock also provides choice when it comes to the size of these foundation models, which offers a lot more flexibility to get the latency and cost characteristics you need for your various applications. For instance, AI21 Labs, who you'll hear from in a moment, provides both their Jurassic Mid and Ultra size models through Amazon Bedrock.
All of this early success on Amazon Bedrock is built on several years of experience making open-source LLMs available to our customers through Amazon SageMaker JumpStart. JumpStart offers machine learning practitioners more control, more choice, and more customization options through Amazon SageMaker Studio, the SageMaker SDK, and the SageMaker console, and new models are being added every single day.
And with that, I'd now like to invite the co-CEO and co-founder of AI21 Labs to share how they're pioneering state-of-the-art language models and architecture systems to transform industries. Everyone, please welcome Ori Goshen to the stage.
Ori: Hi, everyone. It's great to be here; thanks, Francesca. I'm super excited. I'm Ori Goshen, co-founder and co-CEO of AI21 Labs. We are a leader in generative AI, and we're breaking down the barriers to adoption in the enterprise with our state-of-the-art language models.
Our mission is to build reliable and capable AI systems that empower businesses and professionals. Customers combine our models with their organizational data and workflows to build their own AI-based applications. Today, our models are embedded in thousands of applications across industries like retail, financial services, healthcare, and education.
But before I start, I'd actually like to make a prediction: two years from now, we won't be talking about large language models; instead, we'll be talking about AI systems. So today I'm going to talk about how AI21 Labs is leading the way in this transition to AI systems.
AI21 was founded in 2017 to give AI the ability to reason and plan in a reliable manner. The reason we started the company is actually more relevant today than it was back in 2017. Our goal is to bridge the gap between demonstrated capabilities and AI systems that are ready for production deployment.
We first trained LLMs to build Wordtune, an intelligent reading and writing assistant app. Wordtune has reached over 10 million users, in large part due to its accuracy and truthfulness. With Wordtune's traction, enterprises started asking us: how can we embed your LLMs into our own applications?
So we launched our APIs. Our Jurassic models come in two versions, Ultra and Mid, and both are available on Amazon Bedrock. Ultra is the most powerful, for more complex tasks, and Mid is the optimal balance between quality and cost. Customers are using Jurassic for text generation, question answering, summarization, and classification.
For example, a leading sports retailer uses Jurassic to generate customized product descriptions with their own unique tone, length, and purpose.
Earlier, Francesca showed you how easy it is to access our models through Bedrock. To use the Jurassic Ultra or Mid models, all you have to do is change the code to use the right model ID and your prompt; that's it, really easy. We also offer our Python SDK for developer convenience.
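As a sketch of that, switching Jurassic sizes on Bedrock is just a different model ID. The IDs below correspond to the Jurassic-2 models as listed on Bedrock, and the prompt is an example:

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask_jurassic(prompt: str, model_id: str = "ai21.j2-mid-v1") -> str:
    """Call an AI21 Jurassic-2 model on Bedrock. Swap model_id to
    'ai21.j2-ultra-v1' for the larger model."""
    response = bedrock_runtime.invoke_model(
        modelId=model_id,
        body=json.dumps({"prompt": prompt, "maxTokens": 200}),
    )
    result = json.loads(response["body"].read())
    # AI21's response format nests completions under a 'data' key.
    return result["completions"][0]["data"]["text"]

print(ask_jurassic("Write a one-line product description for trail shoes."))
```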
We've learned a lot from our Jurassic models and noticed the most common use cases, such as grounded question answering, summarization, rewriting, and so forth. So we decided to focus on these use cases, and particularly on four characteristics: accuracy, cost, latency, and ease of use. We call these task-specific models.
In fact, these models go way beyond LLMs to handle some of the complexities customers face, such as RAG, input/output validation, and evaluation. You can access the task-specific models through SageMaker JumpStart, deploying them in your VPC. They are also available on our Studio platform.
What are task-specific models? Let's run through one of them. The contextual answers model performs question answering grounded in your organizational data. It's basically a RAG solution, so that when users ask a question, answers are determined according to your knowledge base: reports, manuals, and playbooks.
Our models were designed to support RAG and lead industry benchmarks for model accuracy. We built in functionality to validate the answers, to ensure accurate, comprehensive, and grounded responses. This is key to minimizing the hallucinations and ridiculous mistakes that are common in some general-purpose generative AI models.
We have a saying in the company: no generation without validation. And the request to the model is simple; no machine learning knowledge is needed. You don't need to deal with fine-tuning or even prompt engineering. Just provide the question and the context, and we take care of the rest.
For example, if you're an insurance company, your customers can ask questions about their policies and receive a tailored response based on their specific coverage, exclusions, and limitations.
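As a sketch of what such a call might look like once the contextual answers model has been deployed through SageMaker JumpStart: the endpoint name below is a hypothetical placeholder and the payload shape is illustrative.

```python
import json
import boto3

smr = boto3.client("sagemaker-runtime", region_name="us-east-1")

# Hypothetical endpoint name: assumes the contextual answers model was
# already deployed into your VPC via SageMaker JumpStart.
ENDPOINT_NAME = "ai21-contextual-answers-endpoint"

payload = {
    "context": "Policy 1234 covers water damage up to $5,000 per incident, "
               "excluding flood events.",
    "question": "Is flood damage covered under my policy?",
}

response = smr.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))
```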
We're seeing tremendous customer interest in task-specific models. Customers are excited about the accuracy, improved cost, reduced latency, and ease of use. And since these models accept structured inputs rather than free-form prompts, they're easier and safer: they can't be steered away from their defined task. That's perfect for production use cases.
Through our close collaboration with customers, we've helped them identify and solve real business problems using task-specific models.
Customer support is basically the backbone of Itaú, a digital-first bank. They needed a chatbot to provide an exceptional customer experience, streamline their support operations, and invest more time in human judgment and creativity. Together, we built a chatbot that redefines customer service in the banking industry.
Here's how we built the solution with Itaú using our contextual answers model. The basic flow: a customer inputs their query into the chatbot. A Jurassic model then classifies the question from the query and extracts it into a structured object. The system then performs a semantic search to identify the most relevant information; this can be done with Amazon Kendra or other vector search engines.
The user's question and the retrieved data are then sent to our contextual answers model, which generates an answer based on that retrieved data. This is how the model ensures that answers are actually grounded in the customer's banking information and not in general knowledge.
After the contextual answers model generates its answer, Jurassic LLM rephrases it into a multi-turn response and relays it back to the user. You may ask, why is this architecture preferable to a chat model? The answer is Itaú is a financial institution and for them explainability is critical. This architecture ensures each step is optimized for their specific needs and can be validated and monitored.
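A rough sketch of that four-step flow is below. The Kendra index ID is a placeholder, and for brevity the grounded-answer step is approximated with a plain Jurassic prompt rather than AI21's dedicated contextual answers model:

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
kendra = boto3.client("kendra", region_name="us-east-1")

# Hypothetical Kendra index ID; replace with your own.
KENDRA_INDEX_ID = "00000000-0000-0000-0000-000000000000"

def jurassic(prompt: str) -> str:
    """Helper: call an AI21 Jurassic-2 model on Bedrock."""
    response = bedrock_runtime.invoke_model(
        modelId="ai21.j2-mid-v1",
        body=json.dumps({"prompt": prompt, "maxTokens": 300}),
    )
    return json.loads(response["body"].read())["completions"][0]["data"]["text"]

def answer_banking_question(question: str) -> str:
    # 1. Classify/extract the question into a search query (Jurassic).
    query = jurassic(f"Extract a short search query from: {question}")
    # 2. Semantic search over the knowledge base (Amazon Kendra).
    result = kendra.retrieve(IndexId=KENDRA_INDEX_ID, QueryText=query)
    context = "\n".join(r["Content"] for r in result["ResultItems"][:3])
    # 3. Grounded answer; Itaú used AI21's contextual answers model here.
    answer = jurassic(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    # 4. Rephrase into a conversational, multi-turn reply (Jurassic).
    return jurassic(f"Rephrase this as a friendly chat reply: {answer}")
```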
AWS and AI21 Labs have a close relationship. Customers can leverage our generative AI capabilities while keeping security and privacy at the forefront. With Amazon Bedrock and SageMaker JumpStart, companies can easily access our models without leaving their AWS environment or worrying about managing infrastructure.
A variety of models gives customers balance and choice across quality, cost, and latency. Together, AWS and AI21 Labs offer large, consistent inference workloads and guaranteed throughput at scale. As enterprises shift from experimentation to adoption, we'll see increased demand for reliability. That's the key part. We think the way to get there is not just making more capable models, but providing purpose-built AI systems around those models.
We are seeing early glimpses of that with our task-specific models. AI will not just demonstrate impressive skills, but focus on capabilities that provide real solutions for customers and end users. We predict that all-in-one AGI models may not be practical in the long run; organizations will maintain multiple models siloed by domain. We call those language blades. To learn more about us, visit AI21.com or speak with an AI21 consultant here at re:Invent at our booth, #205. I promise you, it's all just LLMs, no drama. Thank you very much.
Francesca: I love that. Just LLMs, no drama.
As you heard, foundation models can be powerful out of the box and truly useful to your organization. However, they need to access the right data sources. As I mentioned earlier, your data is your differentiator. To ensure you have relevant, high quality data to train your own models or customize foundation models for your use cases, you really need a strong data foundation.
The cloud has changed the way so many of us do business. The cost of compute and storage has come way down. As a result, businesses are storing more data than ever before. Most of you are storing terabytes, if not petabytes, of data. To solve these challenges, your modern data strategy needs to scale and be flexible enough to address various use cases. It also needs to support future projects.
Our mental model at AWS is simple - we want you to have comprehensive tools for each use case, the ability to connect all your data, and end-to-end governance. We offer a broad set of tools to support the end-to-end data journey. The good news is your investments in data services will serve you well in generative AI.
Despite the capabilities of foundation models, they do not have up-to-date private or specific knowledge about your organization and customers. Customers want to combine powerful foundation models with their own data. There are three main approaches:
- Build your own model from scratch. This is costly and time intensive, but some will do it given the unique nature of their data.
- Fine-tune a pre-trained foundation model: tweak it with your own data, resulting in a modified model. Fine-tuning makes it faster to produce customized models.
- Use foundation models out of the box with in-context learning techniques like RAG.
RAG enhances responses by retrieving relevant info from a database, making the model more knowledgeable about your domain and helping generate accurate, relevant responses. Because we know what data was used, we can cite it in the generated responses, making the output more transparent.
Customers are mainly storing RAG data as vector embeddings - translating words into numbers to capture meaning and relationships. While not new, vector embeddings are becoming more important. That's why we now offer embedding support in services like OpenSearch and PostgreSQL.
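As a quick sketch, generating a vector embedding with Amazon Titan on Bedrock looks roughly like this; the input text is just an example:

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Generate a vector embedding for a piece of text with Amazon Titan.
response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-embed-text-v1",
    body=json.dumps({"inputText": "Water damage coverage and exclusions"}),
)

embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # Titan Embeddings G1 - Text returns 1536 dimensions
```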
While powerful, foundation models still require manual programming for complex tasks like booking flights. That's because out-of-the-box they can't take specific actions to fulfill requests. This is where agents come in. Agents execute tasks by calling APIs and systems. Fully managed agents extend the reasoning of foundation models to create and execute orchestration plans.
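As a sketch of invoking a Bedrock agent from code, assuming an agent and alias already exist (the IDs below are placeholders):

```python
import boto3

agents_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Placeholder IDs: use the agent and alias IDs from your own Bedrock setup.
response = agents_runtime.invoke_agent(
    agentId="AGENT_ID",
    agentAliasId="AGENT_ALIAS_ID",
    sessionId="demo-session-1",
    inputText="Book me a flight from Las Vegas to Seattle on Friday.",
)

# The response streams back as events; collect the text chunks.
completion = ""
for event in response["completion"]:
    if "chunk" in event:
        completion += event["chunk"]["bytes"].decode("utf-8")
print(completion)
```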
Promoting responsible AI is a top priority for AWS. We take a people-centric approach, integrating responsible AI across the machine learning life cycle. Yesterday we announced Guardrails for Amazon Bedrock. Guardrails let you implement safeguards aligned with company policies, providing additional control over interactions.
Bedrock also offers capabilities to support security and privacy requirements. The service is HIPAA eligible and GDPR compliant. Your content is not used to improve base models or shared with third parties. Data is encrypted in transit and at rest, and you can encrypt with your own keys. You can use AWS PrivateLink to establish private connectivity without exposing traffic to the internet.
I'd now like to invite Moises Nascimento, CTO of Itaú, to share how they're using data and AI to transform their business.
Moises:
It's a pleasure to share how we architected our data and AI infrastructure to innovate with AI at scale. I've spent my career in Brazil and the US, working on large-scale solutions by designing, implementing, and tuning systems. So when I was invited five years ago to help lead Itaú's digital transformation and modernize our data and analytics platforms, I was excited by the scale of this 100-year-old organization, with over 70 million customers, 100,000 employees, and Latin America's most valuable brand.
Itaú has double-digit petabytes of information that can benefit our customers. To leverage this, we first moved to the cloud once AWS was authorized for financial institutions in Brazil. We formed a partnership with AWS to evolve our operations and modernize how we build infrastructure and digital products.
In 4 years we’ve modernized over 50% of our systems, representing 70% of our most competitive services, while reducing incidents by 98%. We also modernized our data infrastructure and governance to be agile and scalable. Our data mesh now has 8 petabytes of data with thousands of producers/consumers.
It was important to empower each business unit to innovate with data and AI autonomously while keeping customer data integrated to understand behavior across services. Our data strategy uses a data mesh architecture with a control layer for governance, security and privacy. This eliminates ETL, allowing us to ingest data once and use it multiple times.
The control layer uses AWS Glue and Lake Formation for metadata, governance and security. On the producer side, we use solutions like Glue Jobs, EMR and Redshift to ingest and process data. On the consumer side, business units can use Athena and QuickSight out of the box for analytics and exploration.
With the data and infrastructure in place, we built a scalable AI platform called Yara to connect the ML lifecycle. It delivers frameworks and tools for feature stores, risk analysis, deployment orchestration, and observability. We leverage SageMaker and other cloud components heavily to automate and accelerate AI innovation. This decreased our idea-to-value time significantly.
With the arrival of generative AI, we created a sandbox to experiment and learn. We structured an architecture with the same cloud principles: a control layer to automate and reuse configurations and templates; a data layer to integrate and transform data; and an application layer per use case, with frameworks for prompt engineering and data creation using Hugging Face, SageMaker JumpStart, and Amazon Bedrock.
Our governance process catalogs ideas, brainstorms with business units, and prioritizes rapid development. Some examples:
- Our legal team has used generative AI since early 2022 to interpret and classify more than 70,000 legal processes per month at 99% accuracy. We'll soon apply it to read thousands of sentences. This brings efficiency and cost savings.
- In September, we released a new AI feature for investment clients so they can understand how events affect their portfolios. It also suggests ways to address those events.
- We built an AI agent that helps identify and implement AWS resource optimizations, with expected cost reductions of 40% or more in some cases.
With a flexible platform, we’ll continue innovating to deliver the same client-centricity that has sustained us for 100 years. Our goal is to keep evolving our platform to deliver better products and experiences every day.
It's great to see how Itaú is using data and AI securely and responsibly to create better experiences. So far, we've explored building generative AI easily, providing choice through Bedrock and SageMaker JumpStart, data strategy choices, and the importance of responsible AI. Finally, there's no generative AI without the cloud. Making the most of gen AI requires more than just tools alone.
A successful strategy needs a strong foundation of time tested infrastructure that can support the massive scale, power, security and reliability of enterprise applications, along with the elasticity and the speed of the cloud. AWS offers a global cloud infrastructure that we believe you can depend on.
Our cloud spans 102 availability zones across 32 geographic regions all over the world and over 450 points of presence. There's no other cloud provider that offers that many regions with multiple availability zones that are designed with the highest standards for resilience.
And when it comes to custom silicon, AWS has been investing with our partners in designing our own chips for more than a decade, in order to offer a broad choice of high performance, low cost machine learning infrastructure options.
We were the first to bring NVIDIA GPUs to the cloud more than 12 years ago, and companies have been using GPU-based instances to make model training much faster. Today, those same customers are scaling their training workloads to more than 10,000 GPUs.
Customers want the most up-to-date solutions that help them gain a strategic edge by training models faster and at scale, which is why we design our own AI chips. Our Inferentia-based Inf2 instances deliver up to 40% better inference price performance than other comparable EC2 instances.
Our Trn1 instances deliver up to 50% savings on training costs. And as you heard in Adam's and Swami's keynotes, we launched Trainium2, our second-generation chip purpose-built for high-performance training of foundation models.
Trainium2 will power the next generation of EC2 UltraClusters, delivering up to 65 exaflops of aggregate compute. Our customers are seeing impressive results as they train deep learning models and foundation models for their generative AI applications.
Our commitment to you doesn't stop at the services we provide. Through our new AWS Generative AI Innovation Center, we work deeply with customers and partners to accelerate success with tools like Bedrock, bringing expert practitioners, data scientists, and hundreds of industry use cases to help you rapidly deploy gen AI into production.
So I'm very excited to introduce our final presenter, who will talk about their cloud architecture and how they've built a world-class gen AI capability to transform their customer experience and improve developer productivity. Please welcome Dr. Jens Kohl, Head of Offboard Architecture at the BMW Group.
Jens: Thanks, Francesca, for the introduction. Hi, Vegas! Hi, everyone. My name is Jens Kohl. I'm Head of Offboard Architecture at the BMW Group, and offboard architecture covers everything that happens in the cloud. I'm really excited to be here, and I want to show you what we've been doing over the last weeks and months.
Our passion at the BMW Group is to offer our customers premium experiences. When you drive one of our vehicles, you know: the ultimate driving machine, sheer driving pleasure. These are core to our DNA. For digital services, this is a little bit different.
We define our digital services along several aspects. One that is really nice is that we try to make the driving experience even more enjoyable, even more delightful. A good example is how we use Amazon Fire TV in the back seats of our i7. Another example is our Intelligent Personal Assistant, powered by Alexa, which lets you, the customer, enjoy and use everything we offer inside our vehicles while keeping your focus on the road, because you can control it all by speech.
Another guideline for our premium services is that we want to offer personalized and intelligent experiences. We do this by leveraging massive amounts of data, while of course ensuring data privacy, and we process this data in our cloud back ends hosted on AWS.
A good example, and one of my favorite services, is our charging-optimized routing. Based on your vehicle's charging state, your personal driving style, and other factors like available charging stations and traffic, it offers you an optimized route.
I think the services I've mentioned show clearly what sets the bar for a premium service for us: it's only possible with a great, seamless interplay between the vehicle and our cloud back ends.
Now, this might sound easy, but it's actually a challenging task, as some figures from our connected vehicle back end show. Almost 20 years ago, we introduced our first connected vehicle by providing a SIM card in each vehicle, and I'm proud to say we've come quite a long way since then.
Nowadays, we have more than 20 million connected vehicles worldwide. Of these, more than 6 million are fully upgradable over the air, which we do regularly, and that is the benchmark in the automotive industry: no other automotive OEM has a larger connected vehicle fleet than we do.
All in all, those 20 million vehicles generate 12 billion requests a day. In just the last second, that's roughly another 140,000 requests. These 12 billion requests feed more than 1,000 microservices, and all in all we generate 110 terabytes of traffic a day.
And this is what I'm really proud of: we achieve this while ensuring 99.95% reliability. And with our upcoming Neue Klasse architecture, which will hit the market in two years, those figures will even triple.
Now, the question, and the challenge we face, is: how do you continuously optimize the back end? How do you continuously deliver services that keep customers happy? And how can we continuously raise the bar on quality, sustainability, and reliability, while managing or even reducing costs?
This is what my team of architects and developers works on daily, together with more than 450 development teams worldwide. Let me show you how we tackle this with such a complex back end. I think we all agree automation is clearly the only feasible way, so we set up a flywheel to drive and pursue this process.
We start the workflow by measuring, using Trusted Advisor and AWS Config rules to check all our accounts against those rules. Based on the results, we try to gain insights and identify actionable items that tell us how to optimize our accounts and fix the issues.
But the problem is that this doesn't really scale. As all of you who use Trusted Advisor know, you can do this for one account, but it's not capable of scaling up to the level of an organization.
We try to automate a lot of these things, but clearly, if you look at the last three steps, that's the bottleneck. Regardless of what you do, it's difficult to automate gaining insights, identifying what to do, and optimizing accounts.
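As a rough sketch of the measuring step, the AWS Support API exposes Trusted Advisor checks programmatically. The filter to cost checks is illustrative, and the API requires a Business or Enterprise support plan:

```python
import boto3

# The AWS Support API, which backs Trusted Advisor, is only available
# in us-east-1 and requires a Business or Enterprise support plan.
support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]

# Illustrative: pull the results for all cost optimization checks.
for check in checks:
    if check["category"] != "cost_optimizing":
        continue
    result = support.describe_trusted_advisor_check_result(
        checkId=check["id"], language="en"
    )["result"]
    flagged = result.get("flaggedResources", [])
    print(f"{check['name']}: {result['status']}, {len(flagged)} flagged")
```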
Now, isn't there something that might help with that? Yes, we thought so too. That's why we trained and built a generative AI bot to address the two issues I mentioned, which hindered us from scaling.
First, the bot we built is capable of explaining each finding to a user, including remedies. And second, and this is the really big one for us, it also offers to implement the fix as code, in Python or in Terraform, and show it to the user, who can then apply it if they want.
Now, I've talked a lot about this; let's see the bot in action. We deliberately chose the example of an underutilized EC2 instance, because it happens everywhere in the world: it's quite easy to explain, but not really that easy to fix. So, let's see how the bot tackles this problem.
We start by asking the bot: what is the status of the test account we set up? You see, the bot checks the account and says, OK, these are some issues I detected. Then, as a user, I can ask: how can I resolve the low utilization on EC2?
The bot then checks and gives me an explanation of what I could do, plus the code: a Python script and a Terraform script that I can directly copy and use to fix the problem. But then we went one step further and said: hey, if the bot already tells me what to do, why can't it do it by itself?
And that's what you can see here: as a user, I can say, please fix this and give me a callback, so I know the problem has been solved and I can work on other things. Great, isn't it?
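To give a feel for the kind of remediation code the bot generates, here is a sketch that checks a week of CPU utilization and stops the instance if it has stayed idle. The instance ID and threshold are hypothetical, and in practice you might downsize instead of stopping:

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-central-1")
ec2 = boto3.client("ec2", region_name="eu-central-1")

# Hypothetical instance ID and idle threshold, for illustration only.
INSTANCE_ID = "i-0123456789abcdef0"
CPU_THRESHOLD = 5.0  # percent

# Fetch daily average CPU utilization for the last seven days.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    Period=86400,
    Statistics=["Average"],
)

averages = [point["Average"] for point in stats["Datapoints"]]
if averages and max(averages) < CPU_THRESHOLD:
    # Consistently idle for a week: stop it (or downsize it instead).
    ec2.stop_instances(InstanceIds=[INSTANCE_ID])
    print(f"Stopped idle instance {INSTANCE_ID}")
```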
All right, let's dive a little into the architecture of the bot. What you see here is, of course, a high-level architecture. I'd like to mention one or two things before I go into detail.
First of all, it's a fully serverless, managed architecture, which saves us a lot of cost: the bot should be ready when the teams need it, not the other way around. The second thing is something Francesca already mentioned today: we host the bot on Amazon Bedrock in a BMW-owned AWS account.
This ensures that no data is transferred outside of our accounts during either training or inference of the model. And for us at BMW, data privacy is essential, so that was a real game changer.
Now, let's look a little more under the hood, where the bot does the real work. As you can see here, these are the four tasks the bot has to perform, and for each of those tasks, models perform differently. That's why we set the system up as a multi-agent structure, so we can use different models for the different tasks.
And as Francesca and Ori already mentioned today, it's quite easy in Bedrock to exchange or update a model: you just change the model ID in the API call. This helps us always use the latest models, and at the same time we avoid vendor lock-in with a specific model.
And since we're using fine-tuned models, we also save a lot of cost. The second thing, and you've heard it quite often today, is retrieval-augmented generation, or RAG. We use it for two reasons. One of the reasons is our service landscape.
AWS is constantly evolving, constantly developing, so if we were to use a plain foundation model, we'd have to retrain it continuously. By using a RAG structure, we can store all the AWS documentation, BMW best practices, and other documents in an S3 bucket; Amazon Kendra can then check whether there is a specific solution or remedy for an issue we find.
And as I said, the great thing is that we not only grab the AWS documentation automatically, we can also add our own documentation to it.
Now let me come to the benefits. First of all, the bot helps us scale cloud governance up to the level of an organization, with a back end like ours. Second, it supports our DevOps teams, because it helps them facilitate the governance workflow for their accounts and lets them focus on the business logic: the things with which they can really differentiate from the competition.
And since the bot is capable of explaining all the findings, it helps us facilitate continuous learning about the cloud in our organization. The last advantage is that the bot is easy to maintain, since we use a lot of managed services, and it's also easy to extend.
We're currently working on extending the bot, for instance to check CloudWatch logs or to check components directly itself. But that may be something for next year's re:Invent.
Coming back to the title of Francesca's presentation, from hype to impact: for us, the bot shows that gen AI can really live up to the hype, because using it has already delivered a huge impact for us as an organization.
Now let me come to the end of my presentation with an outlook. At the BMW Group, we have been using artificial intelligence for a long time. Today, I was proud and honored to show you one example of where we are using generative AI in our company to improve our workflows.
We're currently working on several use cases along the entire BMW value chain to improve our workflows and our products. So stay tuned for more to come.
Thank you, goodbye, and enjoy re:Invent.
Francesca: I was thinking that if we get everyone to build on generative AI, we all get BMWs. OK, I'm just teasing.
That was amazing. Thank you so much, Jens, and the BMW Group, for your continued innovation in the automotive industry, and for showing how the cloud has helped you scale and transform.
So, everyone, before we wrap: logistically, your questions might be, what's next, and how do I get started in my own organization? We believe AWS has everything you need to accelerate your generative AI journey, from foundation models to services, infrastructure, training, and even code samples.
We also have amazing customers and partners building with gen AI architectures, as you've heard from Ori, Moises, and Jens. I'm excited to see what all of you will do to reimagine and transform your applications on AWS.
Thank you for your time today. Enjoy re:Invent 2023. Thank you.