【Azure OpenAI】OpenAI Function Calling 101


Overview

This article follows the GitHub project OpenAI Function Calling 101 and implements it on Azure OpenAI. References:

Github Function Calling 101

How to use function calling with Azure OpenAI Service - Azure OpenAI Service

One of the difficulties of working with LLMs like ChatGPT is that they do not produce structured data output. This matters a lot for programmatic systems that rely heavily on structured data to interact with other systems. For example, if you wanted to build a program that analyzes the sentiment of a movie review, you might have to run a prompt like the following:

prompt = f'''
Please perform a sentiment analysis on the following movie review:
{MOVIE_REVIEW_TEXT}
Please output your response as a single word: either "Positive" or "Negative". Do not add any extra characters.
'''

The problem is that this does not always work. LLMs frequently tack on an unwanted preamble or a longer explanation, such as: "The sentiment of this movie review is: Positive."

prompt = f'''

Please perform a sentiment analysis on the following movie review:

{The Shawshank Redemption}

Please output your response as a single word: either "Positive" or "Negative". Do not add any extra characters.

'''
---
#OpenAI Answer
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# Define the movie review
movie_review = "The Shawshank Redemption"

# Perform sentiment analysis
sid = SentimentIntensityAnalyzer()
sentiment_scores = sid.polarity_scores(movie_review)

# Determine the sentiment based on the compound score
if sentiment_scores['compound'] >= 0:
    sentiment = "Positive"
else:
    sentiment = "Negative"

# Output the sentiment
sentiment
---
The sentiment analysis of the movie review "The Shawshank Redemption" is "Positive".

While you could extract the answer with a regular expression (🤢), that is clearly not ideal. The ideal case is for the LLM to return structured JSON output like the following:

{
    'sentiment': 'positive'
}
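
To make that pain concrete: without structured output you end up writing brittle post-processing like the sketch below. This is my own illustration, not part of the original notebook; the regex and the 'Unknown' fallback are assumptions:

import re

def parse_sentiment(llm_output):
    '''Pull "Positive" or "Negative" out of a free-form LLM reply (brittle by design).'''
    match = re.search(r'\b(positive|negative)\b', llm_output, re.IGNORECASE)
    # If the model paraphrases ("favorable", "upbeat"), we silently fall back to "Unknown"
    return match.group(1).capitalize() if match else 'Unknown'

print(parse_sentiment('The sentiment of this movie review is: Positive.'))  # Positive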

Enter OpenAI's new function calling! Function calling is exactly the answer to the problem above.
This Jupyter notebook demonstrates a simple example of how to use OpenAI's new function calling in Python. For the full documentation, see https://platform.openai.com/docs/guides/gpt/function-calling

Experiment

Initial configuration

Install the openai Python client; if it is already installed, upgrade it to get the new function calling feature:

pip3 install openai
pip3 install openai --upgrade
# Importing the necessary Python libraries
import os
import json
import yaml
import openai

To fit Azure OpenAI, set up the API the way Azure specifies, load the key from a file, and also prepare an about-me.txt:

#../keys/openai-keys.yaml 
API_KEY: 06332xxxxxxcd4e70bxxxxxx6ee135
#../data/about-me.txt 
Hello! My name is Enigma Zhao. I am an Azure cloud engineer at Microsoft. I enjoy learning about AI and teaching what I learn back to others. I have two sons and a daughter. I drive a Audi A3, and my favorite video game series is The Legend of Zelda.
openai.api_version = "2023-07-01-preview"
openai.api_type = "azure"
openai.api_base = "https://aoaifr01.openai.azure.com/"

# Loading the API key and organization ID from file (NOT pushed to GitHub)
with open('../keys/openai-keys.yaml') as f:
    keys_yaml = yaml.safe_load(f)

# Applying our API key
openai.api_key = keys_yaml['API_KEY']
os.environ['OPENAI_API_KEY'] = keys_yaml['API_KEY']

# Loading the "About Me" text from local file
with open('../data/about-me.txt', 'r') as f:
    about_me = f.read()

Testing JSON conversion

Before using function calling, let's first look at how to generate structured JSON using prompt engineering and regex; this will come in handy later for comparison.

# Engineering a prompt to extract as much information from "About Me" as a JSON object
about_me_prompt = f'''
Please extract information as a JSON object. Please look for the following pieces of information.
Name
Job title
Company
Number of children as a single number
Car make
Car model
Favorite video game series

This is the body of text to extract the information from:
{about_me}
'''
# Getting the response back from ChatGPT (gpt-4)
openai_response = openai.ChatCompletion.create(
    engine="gpt-4",
    messages = [{'role': 'user', 'content': about_me_prompt}]
)

# Loading the response as a JSON object
json_response = json.loads(openai_response['choices'][0]['message']['content'])
print(json_response)

The output is as follows:

root@ubuntu0:~/python# python3 fc1.py
{'Name': 'Enigma Zhao', 'Job title': 'Azure cloud engineer', 'Company': 'Microsoft', 'Number of children': 3, 'Car make': 'Audi', 'Car model': 'A3', 'Favorite video game series': 'The Legend of Zelda'}
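
One caveat: json.loads only works here because the model happened to return pure JSON. If it wraps the JSON in extra prose (which it is free to do), the parse throws an exception. A minimal defensive wrapper, written as an illustration rather than taken from the original code, could look like this:

import json

def safe_json_parse(raw_content):
    '''Parse the model reply as JSON, falling back to the outermost {...} block.'''
    try:
        return json.loads(raw_content)
    except json.JSONDecodeError:
        # Assumption: if the model wrapped the JSON in prose, the payload still sits between the outermost braces
        start, end = raw_content.find('{'), raw_content.rfind('}')
        if start != -1 and end > start:
            return json.loads(raw_content[start:end + 1])
        raise

json_response = safe_json_parse(openai_response['choices'][0]['message']['content'])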

A simple custom function

# Defining our initial extract_person_info function
def extract_person_info(name, job_title, num_children):
    '''
    Prints basic "About Me" information

    Inputs:
        name (str): Name of the person
        job_title (str): Job title of the person
        num_children (int): The number of children the parent has.
    '''

    print(f'This person\'s name is {name}. Their job title is {job_title}, and they have {num_children} children.')

# Defining how we want ChatGPT to call our custom functions
my_custom_functions = [
    {
        'name': 'extract_person_info',
        'description': 'Get "About Me" information from the body of the input text',
        'parameters': {
            'type': 'object',
            'properties': {
                'name': {
                    'type': 'string',
                    'description': 'Name of the person'
                },
                'job_title': {
                    'type': 'string',
                    'description': 'Job title of the person'
                },
                'num_children': {
                    'type': 'integer',
                    'description': 'Number of children the person is a parent to'
                }
            }
        }
    }
]  

openai_response = openai.ChatCompletion.create(
    engine="gpt-4",
    messages = [{'role': 'user', 'content': about_me}],
    functions = my_custom_functions,
    function_call = 'auto'
)

print(openai_response)

The output is shown below. OpenAI parses the about_me content according to the function schema and returns a function_call result:

root@ubuntu0:~/python# python3 fc2.py 
{
  "id": "chatcmpl-7tYiDoEjPpzNw3tPo3ZDFdaHSbS5u",
  "object": "chat.completion",
  "created": 1693475829,
  "model": "gpt-4",
  "prompt_annotations": [
    {
      "prompt_index": 0,
      "content_filter_results": {
        "hate": {
          "filtered": false,
          "severity": "safe"
        },
        "self_harm": {
          "filtered": false,
          "severity": "safe"
        },
        "sexual": {
          "filtered": false,
          "severity": "safe"
        },
        "violence": {
          "filtered": false,
          "severity": "safe"
        }
      }
    }
  ],
  "choices": [
    {
      "index": 0,
      "finish_reason": "function_call",
      "message": {
        "role": "assistant",
        "function_call": {
          "name": "extract_person_info",
          "arguments": "{\n  \"name\": \"Enigma Zhao\",\n  \"job_title\": \"Azure cloud engineer\",\n  \"num_children\": 3\n}"
        }
      },
      "content_filter_results": {}
    }
  ],
  "usage": {
    "completion_tokens": 36,
    "prompt_tokens": 147,
    "total_tokens": 183
  }
}

In the example above, the custom function extracts three very specific pieces of information, and passing our custom "About Me" text as the prompt shows that this works successfully.

What happens if we pass in some other prompt that does not contain this information?

In the API call we set the function_call parameter to 'auto'. This parameter essentially tells ChatGPT to use its best judgment when deciding whether to build output for one of the custom functions.

So what happens when the submitted prompt does not match any of the custom functions?
Simply put, the model falls back to its default behavior, as if function calling did not exist.
Let's test this with an arbitrary prompt, "How tall is the Eiffel Tower?", by modifying the call as follows:

openai_response = openai.ChatCompletion.create(
    engine="gpt-4",
    messages = [{'role': 'user', 'content': 'How tall is the Eiffel Tower?'}],
    functions = my_custom_functions,
    function_call = 'auto'
)

print(openai_response)

The output:

{
  "id": "chatcmpl-7tYvdaiKaoiWW7NXz8QDeZZYEKnko",
  "object": "chat.completion",
  "created": 1693476661,
  "model": "gpt-4",
  "prompt_annotations": [
    {
      "prompt_index": 0,
      "content_filter_results": {
        "hate": {
          "filtered": false,
          "severity": "safe"
        },
        "self_harm": {
          "filtered": false,
          "severity": "safe"
        },
        "sexual": {
          "filtered": false,
          "severity": "safe"
        },
        "violence": {
          "filtered": false,
          "severity": "safe"
        }
      }
    }
  ],
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "The Eiffel Tower is approximately 330 meters (1083 feet) tall."
      },
      "content_filter_results": {
        "hate": {
          "filtered": false,
          "severity": "safe"
        },
        "self_harm": {
          "filtered": false,
          "severity": "safe"
        },
        "sexual": {
          "filtered": false,
          "severity": "safe"
        },
        "violence": {
          "filtered": false,
          "severity": "safe"
        }
      }
    }
  ],
  "usage": {
    "completion_tokens": 18,
    "prompt_tokens": 97,
    "total_tokens": 115
  }
}
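
Beyond 'auto', the function_call parameter also accepts {'name': '<function name>'} to force a specific function, or 'none' to suppress function calling entirely. A quick sketch of the forced variant, reusing the my_custom_functions list defined above:

# Force ChatGPT to call extract_person_info regardless of what it would pick on its own
openai_response = openai.ChatCompletion.create(
    engine = 'gpt-4',
    messages = [{'role': 'user', 'content': about_me}],
    functions = my_custom_functions,
    function_call = {'name': 'extract_person_info'}  # 'none' would disable function calling
)

print(openai_response['choices'][0]['message']['function_call'])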

Multiple samples

Now let's demonstrate what happens when we run three different samples against all of the custom functions.

# Defining a function to extract only vehicle information
def extract_vehicle_info(vehicle_make, vehicle_model):
    '''
    Prints basic vehicle information

    Inputs:
        - vehicle_make (str): Make of the vehicle
        - vehicle_model (str): Model of the vehicle
    '''
    
    print(f'Vehicle make: {vehicle_make}\nVehicle model: {vehicle_model}')

# Defining a function to extract all information provided in the original "About Me" prompt
def extract_all_info(name, job_title, num_children, vehicle_make, vehicle_model, company_name, favorite_vg_series):
    '''
    Prints the full "About Me" information

    Inputs:
        - name (str): Name of the person
        - job_title (str): Job title of the person
        - num_children (int): The number of children the parent has
        - vehicle_make (str): Make of the vehicle
        - vehicle_model (str): Model of the vehicle
        - company_name (str): Name of the company the person works for
        - favorite_vg_series (str): Person's favorite video game series.
    '''
    
    print(f'''
    This person\'s name is {name}. Their job title is {job_title}, and they have {num_children} children.
    They drive a {vehicle_make} {vehicle_model}.
    They work for {company_name}.
    Their favorite video game series is {favorite_vg_series}.
    ''')
# Defining how we want ChatGPT to call our custom functions
my_custom_functions = [
    {
        'name': 'extract_person_info',
        'description': 'Get "About Me" information from the body of the input text',
        'parameters': {
            'type': 'object',
            'properties': {
                'name': {
                    'type': 'string',
                    'description': 'Name of the person'
                },
                'job_title': {
                    'type': 'string',
                    'description': 'Job title of the person'
                },
                'num_children': {
                    'type': 'integer',
                    'description': 'Number of children the person is a parent to'
                }
            }
        }
    },
    {
        'name': 'extract_vehicle_info',
        'description': 'Extract the make and model of the person\'s car',
        'parameters': {
            'type': 'object',
            'properties': {
                'vehicle_make': {
                    'type': 'string',
                    'description': 'Make of the person\'s vehicle'
                },
                'vehicle_model': {
                    'type': 'string',
                    'description': 'Model of the person\'s vehicle'
                }
            }
        }
    },
    {
        'name': 'extract_all_info',
        'description': 'Extract all information about a person including their vehicle make and model',
        'parameters': {
            'type': 'object',
            'properties': {
                'name': {
                    'type': 'string',
                    'description': 'Name of the person'
                },
                'job_title': {
                    'type': 'string',
                    'description': 'Job title of the person'
                },
                'num_children': {
                    'type': 'integer',
                    'description': 'Number of children the person is a parent to'
                },
                'vehicle_make': {
                    'type': 'string',
                    'description': 'Make of the person\'s vehicle'
                },
                'vehicle_model': {
                    'type': 'string',
                    'description': 'Model of the person\'s vehicle'
                },
                'company_name': {
                    'type': 'string',
                    'description': 'Name of the company the person works for'
                },
                'favorite_vg_series': {
                    'type': 'string',
                    'description': 'Name of the person\'s favorite video game series'
                }
            }
        }
    }
]
# Defining a list of samples
samples = [
    str(about_me),
    'My name is David Hundley. I am a principal machine learning engineer, and I have two daughters.',
    'She drives a Kia Sportage.'
]

# Iterating over the three samples
for i, sample in enumerate(samples):
    
    print(f'Sample #{i + 1}\'s results:')

    # Getting the response back from ChatGPT (gpt-4)
    openai_response = openai.ChatCompletion.create(
        engine = 'gpt-4',
        messages = [{'role': 'user', 'content': sample}],
        functions = my_custom_functions,
        function_call = 'auto'
    )

    # Printing the sample's response
    print(openai_response)

The output is as follows:

root@ubuntu0:~/python# python3 fc3.py
Sample #1's results:
{
  "id": "chatcmpl-7tq3Nh7BSccfFAmT2psZcRR3hHpZ7",
  "object": "chat.completion",
  "created": 1693542489,
  "model": "gpt-4",
  "prompt_annotations": [
    {
      "prompt_index": 0,
      "content_filter_results": {
        "hate": {
          "filtered": false,
          "severity": "safe"
        },
        "self_harm": {
          "filtered": false,
          "severity": "safe"
        },
        "sexual": {
          "filtered": false,
          "severity": "safe"
        },
        "violence": {
          "filtered": false,
          "severity": "safe"
        }
      }
    }
  ],
  "choices": [
    {
      "index": 0,
      "finish_reason": "function_call",
      "message": {
        "role": "assistant",
        "function_call": {
          "name": "extract_all_info",
          "arguments": "{\n\"name\": \"Enigma Zhao\",\n\"job_title\": \"Azure cloud engineer\",\n\"num_children\": 3,\n\"vehicle_make\": \"Audi\",\n\"vehicle_model\": \"A3\",\n\"company_name\": \"Microsoft\",\n\"favorite_vg_series\": \"The Legend of Zelda\"\n}"
        }
      },
      "content_filter_results": {}
    }
  ],
  "usage": {
    "completion_tokens": 67,
    "prompt_tokens": 320,
    "total_tokens": 387
  }
}
Sample #2's results:
{
  "id": "chatcmpl-7tq3QUA3o1yixTLZoPtuWOfKA38ub",
  "object": "chat.completion",
  "created": 1693542492,
  "model": "gpt-4",
  "prompt_annotations": [
    {
      "prompt_index": 0,
      "content_filter_results": {
        "hate": {
          "filtered": false,
          "severity": "safe"
        },
        "self_harm": {
          "filtered": false,
          "severity": "safe"
        },
        "sexual": {
          "filtered": false,
          "severity": "safe"
        },
        "violence": {
          "filtered": false,
          "severity": "safe"
        }
      }
    }
  ],
  "choices": [
    {
      "index": 0,
      "finish_reason": "function_call",
      "message": {
        "role": "assistant",
        "function_call": {
          "name": "extract_person_info",
          "arguments": "{\n\"name\": \"David Hundley\",\n\"job_title\": \"principal machine learning engineer\",\n\"num_children\": 2\n}"
        }
      },
      "content_filter_results": {}
    }
  ],
  "usage": {
    "completion_tokens": 33,
    "prompt_tokens": 282,
    "total_tokens": 315
  }
}
Sample #3's results:
{
  "id": "chatcmpl-7tq3RxN3tCLKbPadWkGQguploKFw2",
  "object": "chat.completion",
  "created": 1693542493,
  "model": "gpt-4",
  "prompt_annotations": [
    {
      "prompt_index": 0,
      "content_filter_results": {
        "hate": {
          "filtered": false,
          "severity": "safe"
        },
        "self_harm": {
          "filtered": false,
          "severity": "safe"
        },
        "sexual": {
          "filtered": false,
          "severity": "safe"
        },
        "violence": {
          "filtered": false,
          "severity": "safe"
        }
      }
    }
  ],
  "choices": [
    {
      "index": 0,
      "finish_reason": "function_call",
      "message": {
        "role": "assistant",
        "function_call": {
          "name": "extract_vehicle_info",
          "arguments": "{\n  \"vehicle_make\": \"Kia\",\n  \"vehicle_model\": \"Sportage\"\n}"
        }
      },
      "content_filter_results": {}
    }
  ],
  "usage": {
    "completion_tokens": 27,
    "prompt_tokens": 268,
    "total_tokens": 295
  }
}

For each of these prompts, ChatGPT selected the correct custom function; note in particular the name value under function_call in the API response object.

Besides being a convenient way to determine which function's arguments to use, we can also programmatically map the actual custom Python functions to this value so that the right code runs.

# Iterating over the three samples
for i, sample in enumerate(samples):
    
    print(f'Sample #{i + 1}\'s results:')

    # Getting the response back from ChatGPT (gpt-4)
    openai_response = openai.ChatCompletion.create(
        engine = 'gpt-4',
        messages = [{'role': 'user', 'content': sample}],
        functions = my_custom_functions,
        function_call = 'auto'
    )['choices'][0]['message']

    # Checking to see that a function call was invoked
    if openai_response.get('function_call'):

        # Checking to see which specific function call was invoked
        function_called = openai_response['function_call']['name']

        # Extracting the arguments of the function call
        function_args = json.loads(openai_response['function_call']['arguments'])

        # Invoking the proper functions
        if function_called == 'extract_person_info':
            extract_person_info(*list(function_args.values()))
        elif function_called == 'extract_vehicle_info':
            extract_vehicle_info(*list(function_args.values()))
        elif function_called == 'extract_all_info':
            extract_all_info(*list(function_args.values()))

The output is as follows:

root@ubuntu0:~/python# python3 fc4.py 
Sample #1's results:

    This person's name is Enigma Zhao. Their job title is Azure cloud engineer, and they have 3 children.
    They drive a Audi A3.
    They work for Microsoft.
    Their favorite video game series is The Legend of Zelda.
    
Sample #2's results:
This person's name is David Hundley. Their job title is principal machine learning engineer, and they have 2 children.
Sample #3's results:
Vehicle make: Kia
Vehicle model: Sportage
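
One design note on the dispatch code above: unpacking the arguments positionally with *list(function_args.values()) assumes the JSON keys come back in the same order as the function parameters. Since the argument names in the schema match the Python parameter names, a name-based lookup with keyword unpacking is a safer variant. This is my own sketch, assuming openai_response is the message object from the loop above:

# Map the function names the model can return to the actual Python callables
available_functions = {
    'extract_person_info': extract_person_info,
    'extract_vehicle_info': extract_vehicle_info,
    'extract_all_info': extract_all_info,
}

if openai_response.get('function_call'):
    function_called = openai_response['function_call']['name']
    function_args = json.loads(openai_response['function_call']['arguments'])
    # Keyword unpacking matches arguments by name, so key order no longer matters
    available_functions[function_called](**function_args)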

OpenAI Function Calling with LangChain

Given how popular LangChain is in the generative AI community, here is some code showing how to use this feature within LangChain.

Note that LangChain must be installed before use:

pip3 install LangChain
# Importing the LangChain objects
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts.chat import ChatPromptTemplate
from langchain.chains.openai_functions import create_structured_output_chain
# Setting the proper instance of the OpenAI model
#llm = ChatOpenAI(model = 'gpt-3.5-turbo-0613')

# The model argument format has changed: ChatOpenAI(model = 'gpt-4') triggers a warning,
# so the engine (Azure deployment) is passed via model_kwargs instead
llm = ChatOpenAI(model_kwargs={'engine': 'gpt-4'})

# Setting a LangChain ChatPromptTemplate
chat_prompt_template = ChatPromptTemplate.from_template('{my_prompt}')

# Setting the JSON schema for extracting vehicle information
langchain_json_schema = {
    'name': 'extract_vehicle_info',
    'description': 'Extract the make and model of the person\'s car',
    'type': 'object',
    'properties': {
        'vehicle_make': {
            'title': 'Vehicle Make',
            'type': 'string',
            'description': 'Make of the person\'s vehicle'
        },
        'vehicle_model': {
            'title': 'Vehicle Model',
            'type': 'string',
            'description': 'Model of the person\'s vehicle'
        }
    }
}
# Defining the LangChain chain object for function calling
chain = create_structured_output_chain(output_schema = langchain_json_schema,
                                       llm = llm,
                                       prompt = chat_prompt_template)
# Getting results with a demo prompt
print(chain.run(my_prompt = about_me))

The output is as follows:

root@ubuntu0:~/python# python3 fc0.py 
{'vehicle_make': 'Audi', 'vehicle_model': 'A3'}
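
If you would rather not hand-write the JSON schema, create_structured_output_chain can also take a Pydantic model as its output schema (at least in the LangChain version used here). The sketch below is my own variant, not code from the original article; it reuses the llm and chat_prompt_template objects defined above:

from pydantic import BaseModel, Field

class VehicleInfo(BaseModel):
    '''Make and model of the person's car.'''
    vehicle_make: str = Field(description = "Make of the person's vehicle")
    vehicle_model: str = Field(description = "Model of the person's vehicle")

# The chain now returns a VehicleInfo instance instead of a plain dict
pydantic_chain = create_structured_output_chain(output_schema = VehicleInfo,
                                                llm = llm,
                                                prompt = chat_prompt_template)
print(pydantic_chain.run(my_prompt = about_me))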

Summary

A few things differ from the original article's configuration:

  1. To adapt to Azure OpenAI, configure the API settings the way Azure specifies, keep the key in a file, and prepare an about-me.txt

    #../keys/openai-keys.yaml
    API_KEY: 06332xxxxxxcd4e70bxxxxxx6ee135
    
    #../data/about-me.txt
    Hello! My name is Enigma Zhao. I am an Azure cloud engineer at Microsoft. I enjoy learning about AI and teaching what I learn back to others. I have two sons and a daughter. I drive a Audi A3, and my favorite video game series is The Legend of Zelda.
    
  2. Basic configuration follows the Azure OpenAI (AOAI) format

    openai.api_version = "2023-07-01-preview"
    openai.api_type = "azure"
    openai.api_base = "https://aoaifr01.openai.azure.com/"
    #Applying our API key
    openai.api_key = keys_yaml['API_KEY']
    os.environ['OPENAI_API_KEY'] = keys_yaml['API_KEY']  # needed for the multi-sample test later on
    
  3. In the function-calling configuration, change model = 'gpt-3.5-turbo' to engine="gpt-4"

    openai_response = openai.ChatCompletion.create(
        engine = "gpt-4",
        messages = [{'role': 'user', 'content': about_me}],
        functions = my_custom_functions,
        function_call = 'auto'
    )
    
  4. OpenAI Function Calling with LangChain

  • Install LangChain first

       pip3 install LangChain
    
  • The model argument format has changed: using the original ChatOpenAI(model = 'gpt-4') triggers a warning

         llm = ChatOpenAI(model_kwargs={'engine': 'gpt-4'})
    

That concludes this walkthrough of 【Azure OpenAI】OpenAI Function Calling 101.
