| Class | Description |
|---|---|
| ChatResult | The response to the chat conversation. |
| GenerateTextResult | The generated text result to return. |
| Class | Description |
|---|---|
| BaseChatRequest | Base class for chat inference requests. Note: Objects should always be created or deserialized using the Builder. |
| BaseChatRequest.ApiFormat | The API format for the model's request. |
| BaseChatResponse | Base class for chat inference responses. Note: Objects should always be created or deserialized using the Builder. |
| BaseChatResponse.ApiFormat | The API format for the model's response. |
| ChatChoice | Represents a single instance of the chat response. |
| ChatChoice.Builder | |
| ChatContent | The base class for the chat content. |
| ChatContent.Type | The type of the content. |
| ChatDetails | Details of the conversation for the model to respond to. |
| ChatDetails.Builder | |
| ChatResult | The response to the chat conversation. |
| ChatResult.Builder | |
| Choice | Represents a single instance of generated text. |
| Choice.Builder | |
| Citation | A section of the generated reply that cites external knowledge. |
| Citation.Builder | |
| CohereChatRequest | Details for the chat request for Cohere models. |
| CohereChatRequest.Builder | |
| CohereChatResponse | The response to the chat conversation. |
| CohereChatResponse.Builder | |
| CohereChatResponse.FinishReason | Why the generation was completed. |
| CohereLlmInferenceRequest | Details for the text generation request for Cohere models. |
| CohereLlmInferenceRequest.Builder | |
| CohereLlmInferenceRequest.ReturnLikelihoods | Specifies how, and whether, the token likelihoods are returned with the response. |
| CohereLlmInferenceRequest.Truncate | For an input that's longer than the maximum token length, specifies which part of the input text will be truncated. |
| CohereLlmInferenceResponse | The generated text result to return. |
| CohereLlmInferenceResponse.Builder | |
| CohereMessage | A message that represents a single dialogue of the chat. Note: Objects should always be created or deserialized using the CohereMessage.Builder. |
| CohereMessage.Builder | |
| CohereMessage.Role | One of CHATBOT\|USER to identify who the message is coming from. |
| DedicatedServingMode | The model's serving mode is dedicated serving and has an endpoint on a dedicated AI cluster. |
| DedicatedServingMode.Builder | |
| EmbedTextDetails | Details for the request to embed texts. |
| EmbedTextDetails.Builder | |
| EmbedTextDetails.InputType | Specifies the input type. |
| EmbedTextDetails.Truncate | For an input that's longer than the maximum token length, specifies which part of the input text will be truncated. |
| EmbedTextResult | The generated embedding result to return. |
| EmbedTextResult.Builder | |
| GeneratedText | The text generated during each run. |
| GeneratedText.Builder | |
| GenerateTextDetails | Details for the request to generate text. |
| GenerateTextDetails.Builder | |
| GenerateTextResult | The generated text result to return. |
| GenerateTextResult.Builder | |
| GenericChatRequest | Details for the chat request. |
| GenericChatRequest.Builder | |
| GenericChatResponse | The response to the chat conversation. |
| GenericChatResponse.Builder | |
| LlamaLlmInferenceRequest | Details for the text generation request for Llama models. |
| LlamaLlmInferenceRequest.Builder | |
| LlamaLlmInferenceResponse | The generated text result to return. |
| LlamaLlmInferenceResponse.Builder | |
| LlmInferenceRequest | The base class for the inference requests. |
| LlmInferenceRequest.RuntimeType | The runtime of the provided model. |
| LlmInferenceResponse | The base class for inference responses. |
| LlmInferenceResponse.RuntimeType | The runtime of the provided model. |
| Logprobs | Indicates whether the logarithmic probabilities are set. |
| Logprobs.Builder | |
| Message | A message that represents a single dialogue of the chat. Note: Objects should always be created or deserialized using the Message.Builder. |
| Message.Builder | |
| OnDemandServingMode | The model's serving mode is on-demand serving on shared infrastructure. |
| OnDemandServingMode.Builder | |
| SearchQuery | The generated search query. |
| SearchQuery.Builder | |
| ServingMode | The model's serving mode, which can be on-demand serving or dedicated serving. |
| ServingMode.ServingType | The serving mode type, which can be on-demand serving or dedicated serving. |
| SummarizeTextDetails | Details for the request to summarize text. |
| SummarizeTextDetails.Builder | |
| SummarizeTextDetails.Extractiveness | Controls how close to the original text the summary is. |
| SummarizeTextDetails.Format | Indicates the style in which the summary is delivered: a free-form paragraph or bullet points. |
| SummarizeTextDetails.Length | Indicates the approximate length of the summary. |
| SummarizeTextResult | The summarized text result to return to the caller. |
| SummarizeTextResult.Builder | |
| TextContent | Represents a single instance of text chat content. |
| TextContent.Builder | |
| TokenLikelihood | An object that contains the returned token and its corresponding likelihood. |
| TokenLikelihood.Builder | |
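The repeated Builder notes above reflect the construction convention these model classes share: instances are immutable and are created through a nested Builder rather than public constructors. A minimal, self-contained sketch of that convention, where `ChatRequestSketch` is an illustrative stand-in (not a real SDK class) for a model type such as CohereChatRequest:

```java
// Illustrative sketch of the nested-Builder convention; the class, field,
// and default values here are assumptions, not the real SDK definitions.
public class ChatRequestSketch {
    private final String message;
    private final int maxTokens;

    // Private constructor: instances are only created via the Builder.
    private ChatRequestSketch(Builder b) {
        this.message = b.message;
        this.maxTokens = b.maxTokens;
    }

    public String getMessage() { return message; }
    public int getMaxTokens() { return maxTokens; }

    public static Builder builder() { return new Builder(); }

    // Nested Builder: each setter returns the Builder so calls chain fluently.
    public static class Builder {
        private String message;
        private int maxTokens = 600; // illustrative default, not an SDK value

        public Builder message(String message) { this.message = message; return this; }
        public Builder maxTokens(int maxTokens) { this.maxTokens = maxTokens; return this; }

        public ChatRequestSketch build() { return new ChatRequestSketch(this); }
    }

    public static void main(String[] args) {
        ChatRequestSketch request = ChatRequestSketch.builder()
                .message("Summarize dedicated AI clusters.")
                .maxTokens(200)
                .build();
        System.out.println(request.getMessage()
                + " (maxTokens=" + request.getMaxTokens() + ")");
    }
}
```

The same `builder() … build()` chain applies to every class in the table whose row lists a nested Builder.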
| Class | Description |
|---|---|
| ChatDetails | Details of the conversation for the model to respond to. |
| EmbedTextDetails | Details for the request to embed texts. |
| GenerateTextDetails | Details for the request to generate text. |
| SummarizeTextDetails | Details for the request to summarize text. |
| Class | Description |
|---|---|
| ChatResult | The response to the chat conversation. |
| EmbedTextResult | The generated embedding result to return. |
| GenerateTextResult | The generated text result to return. |
| SummarizeTextResult | The summarized text result to return to the caller. |
Copyright © 2016–2024. All rights reserved.