=GPT3 (by OpenAI)

The =GPT3 formula allows you to use OpenAI's latest AI technology at scale.


1. Set Your OpenAI API Key


Sign In at Overfit.AI To Update Your OpenAI API Key

In Google Sheet

Use the =API_OpenAI Formula to Set OpenAI API Key

This is an optional alternative method of setting your OpenAI API Key directly within your Google Sheet.

=API_OpenAI(key)

Example

=API_OpenAI("YOUR-OPENAI-KEY-HERE")

Find Your OpenAI API Key at Beta.OpenAI.com

(Personal > Account > API Keys)

Until our presets are officially approved by the OpenAI team, you must have OpenAI API access to use the =GPT3 formula. (Anyone can join the OpenAI waitlist.)

Your OpenAI API Key will be stored securely as a Document Property of any Google Sheet you have authenticated with your Overfit.AI API Key. Anyone you allow to access this Google Sheet will be able to use your OpenAI Tokens and Overfit.AI Tasks.

2. Use the =GPT3 Formula in Google Sheets to get a response from the OpenAI API

Refer to the official OpenAI documentation for details on completions, prompts, and parameter settings.

=GPT3(prompt, max_tokens, engine, temperature, frequency_penalty, presence_penalty, top_p, stop, n, best_of, logit_bias, logprobs, echo, query)

=GPT3 Formula Example (simple)

=GPT3("The meaning of life is")

I am sending the prompt "The meaning of life is" to the GPT-3 API using the default curie engine and other default settings.
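
You can also pass a cell reference as the prompt, which is how the formula scales across many rows. A minimal sketch, assuming your prompts live in column A starting at A2:

=GPT3(A2)

Fill this formula down a column to generate a completion for every prompt in column A.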

=GPT3 Formula Example (advanced)

=GPT3("The meaning of life is", 16, "davinci", 0.8, 0.4, 0.2, 1, ".", 1, 2, "'6342':-1, '1582':-10", 2, true, "$")

I am sending the prompt "The meaning of life is" to the GPT-3 API using the davinci engine, with a maximum of 16 tokens, stopping at the first ".", with a temperature of 0.8, frequency penalty of 0.4, presence penalty of 0.2, and Top-P of 1. I want a single result (n of 1) chosen from best_of 2 server-side completions, using a logit bias of "'6342':-1, '1582':-10", logprobs of 2, the prompt echoed back (echo is true), and the "$" query to display the raw JSON results.
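
Between the simple and advanced forms, you can usually supply just the leading parameters. A hedged sketch, assuming omitted trailing arguments fall back to the defaults listed in the table below:

=GPT3("The meaning of life is", 32, "davinci", 0.2)

This requests up to 32 tokens from the davinci engine at a low temperature of 0.2 for a more deterministic completion, leaving all later parameters at their defaults.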

=GPT3 accepts the following parameters (variable, default, description):

prompt (required)

The text prompt to send to the GPT-3 API.

max_tokens (default: 16)

The maximum number of tokens to generate. Requests can use up to 2048 tokens shared between prompt and completion. (One token is roughly 4 characters for normal English text.)

engine (default: "curie")

The GPT-3 engine to use. Learn more & compare engines.

temperature (default: 1)

What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.

We generally recommend altering this or top_p but not both.

frequency_penalty (default: 0)

Number between 0 and 1 that penalizes new tokens based on their existing frequency in the text so far. Decreases the model's likelihood to repeat the same line verbatim.

See more information about frequency and presence penalties.

presence_penalty (default: 0)

Number between 0 and 1 that penalizes new tokens based on whether they appear in the text so far. Increases the model's likelihood to talk about new topics.

See more information about frequency and presence penalties.

top_p (default: 1)

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

stop (default: null)

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

n (default: 1)

How many completions to generate for each prompt.

Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.

best_of (default: 1)

Generates best_of completions server-side and returns the "best" (the one with the lowest log probability per token). Results cannot be streamed.

When used with n, best_of controls the number of candidate completions and n specifies how many to return; best_of must be greater than n.

Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.

logit_bias (default: null)

Modify the likelihood of specified tokens appearing in the completion.

Read: Controlling GPT-3 with Logit Bias

logprobs (default: null)

Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 10, the API will return a list of the 10 most likely tokens.

echo (default: false)

Whether the prompt should be echoed back in addition to the completion (true or false).

query (default: "$..text")

Your JSONPath query of the API response. "$" returns the raw API response; see the example after this table.

JSONPath reference: Goessner.net/articles/JsonPath
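
To see what the query parameter selects, here is an abridged completion response from the OpenAI API (the field names follow the OpenAI completions API; the values shown are illustrative):

{
  "id": "cmpl-...",
  "object": "text_completion",
  "model": "curie",
  "choices": [
    {
      "text": " a question that has puzzled philosophers for centuries.",
      "index": 0,
      "logprobs": null,
      "finish_reason": "length"
    }
  ]
}

The default query "$..text" recursively selects every text field, so the formula returns only the completion text. Passing "$" instead returns this entire JSON response as a string.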