Use Gemini 3 Pro to generate human-like responses to a given prompt or question. Maximum response length varies by model.
Beta
This Wrk Action uses AI-generated content. Like all AI content, use with caution; information may be outdated, incomplete, or inaccurate.
Application
Google Gemini
Inputs (what you have)
| Name | Description | Data Type | Required? | Example |
| --- | --- | --- | --- | --- |
| Prompt | The prompt to generate a completion for. Input is truncated if it exceeds the limit set by the API. | Text (Long) | Yes | |
| System message | A message that sets the context and expectations for the conversation with the AI. | Text (Long) | No | |
| Maximum output tokens | The maximum number of reasoning and output tokens that can be used in this Wrk Action. If the Generated text output is blank, try increasing this value. | Number | No | |
| Temperature | Sampling temperature, between 0 and 2. Higher values such as 0.8 make the output more random; lower values such as 0.2 make it more focused and deterministic. | Number with decimals | No | |
| Thinking level | Constrains the thinking effort of thinking models. Reducing thinking effort can produce faster responses and use fewer thinking tokens. | Text (Short) | No | |
| Files | Files to be provided to the AI. | List of Files | No | |
| Include thoughts? | If set to true, the response includes the AI's thoughts, if available. | True/False | No | |
| Return response as JSON? | If set to true, the AI's response is returned as valid JSON. | True/False | No | |
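The inputs above correspond closely to the fields of Google's Gemini `generateContent` request. As an illustrative sketch only (not Wrk's internal implementation; the field names follow Google's public REST API, and the exact mapping of each input is an assumption), the action's inputs could be assembled into a request body like this:

```python
# Sketch: mapping the Wrk Action inputs onto a Gemini generateContent
# request body. Field names follow Google's public REST API (camelCase);
# this is illustrative, not the action's actual implementation.

def build_request_body(prompt, system_message=None, max_output_tokens=None,
                       temperature=None, return_json=False,
                       include_thoughts=False):
    body = {"contents": [{"role": "user", "parts": [{"text": prompt}]}]}
    if system_message:
        # "System message" sets context and expectations for the conversation.
        body["systemInstruction"] = {"parts": [{"text": system_message}]}
    generation_config = {}
    if max_output_tokens is not None:
        # Covers both reasoning and output tokens, per the input description.
        generation_config["maxOutputTokens"] = max_output_tokens
    if temperature is not None:
        # Sampling temperature between 0 and 2.
        generation_config["temperature"] = temperature
    if return_json:
        # "Return response as JSON?" maps to a JSON response MIME type.
        generation_config["responseMimeType"] = "application/json"
    if include_thoughts:
        # "Include thoughts?" maps to the thinking configuration.
        generation_config["thinkingConfig"] = {"includeThoughts": True}
    if generation_config:
        body["generationConfig"] = generation_config
    return body

body = build_request_body("Summarize this ticket", system_message="Be concise",
                          max_output_tokens=1024, temperature=0.2,
                          return_json=True)
```

Leaving an optional input unset simply omits the corresponding field, so the API falls back to its defaults.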
Outputs (what you get)
| Name | Description | Data Type | Required? | Example |
| --- | --- | --- | --- | --- |
| Generated text | Text content created by artificial intelligence. | Text (Long) | No | |
| Stop reason | The reason the response stopped generating. | Text (Short) | No | |
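The Stop reason output is the first thing to check when Generated text comes back blank. A minimal sketch of that check, assuming the action surfaces Gemini's public `finishReason` values (e.g. `STOP`, `MAX_TOKENS`, `SAFETY`; the exact strings are an assumption, not confirmed by this page):

```python
# Sketch: interpreting the Stop reason output after a job completes.
# The reason strings follow Gemini's public finishReason values, which
# is an assumption about what the Wrk Action surfaces.

def diagnose(generated_text, stop_reason):
    """Return a short troubleshooting hint for a completed job."""
    if stop_reason == "MAX_TOKENS" and not generated_text:
        # Matches the input-table advice: reasoning tokens can consume
        # the whole budget before any visible text is emitted.
        return "Blank output: try increasing Maximum output tokens."
    if stop_reason == "SAFETY":
        return "Response was blocked by safety filters; rephrase the prompt."
    if stop_reason == "STOP":
        return "Model finished normally."
    return f"Stopped for reason: {stop_reason}"

hint = diagnose("", "MAX_TOKENS")
```

This mirrors the guidance in the Maximum output tokens description: a blank Generated text with a max-tokens stop reason usually means the token budget ran out during reasoning.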
Outcomes
| Name | Description |
| --- | --- |
| Success | This status is selected if the job has successfully completed. |
Requirements
N/A
