google.generativeai.create_tuned_model
Calls the API to initiate a tuning process that optimizes a model for specific data, returning an operation object to track and manage the tuning progress.
google.generativeai.create_tuned_model(
    source_model: model_types.AnyModelNameOptions,
    training_data: model_types.TuningDataOptions,
    *,
    id: (str | None) = None,
    display_name: (str | None) = None,
    description: (str | None) = None,
    temperature: (float | None) = None,
    top_p: (float | None) = None,
    top_k: (int | None) = None,
    epoch_count: (int | None) = None,
    batch_size: (int | None) = None,
    learning_rate: (float | None) = None,
    input_key: str = 'text_input',
    output_key: str = 'output',
    client: (glm.ModelServiceClient | None) = None,
    request_options: (helper_types.RequestOptionsType | None) = None
) -> operations.CreateTunedModelOperation
Since tuning a model can take significant time, this API does not wait for the tuning to complete. Instead, it returns a google.api_core.operation.Operation object that lets you check the status of the tuning job, wait for it to complete, and inspect the result. After the job completes, you can find the resulting TunedModel object via Operation.result(), palm.list_tuned_models, or palm.get_tuned_model(model_id).
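The non-blocking pattern can be sketched with a minimal stand-in class (the stand-in is hypothetical and for illustration only; the real google.api_core.operation.Operation exposes the same done()/result() surface but polls the tuning service):

```python
import time

class FakeTuningOperation:
    """Hypothetical stand-in mimicking google.api_core.operation.Operation."""
    def __init__(self, finish_at):
        self._finish_at = finish_at  # monotonic time when the "job" completes

    def done(self):
        # The real Operation asks the service for job status; here we
        # just check the clock so the sketch runs locally.
        return time.monotonic() >= self._finish_at

    def result(self):
        # The real Operation blocks until the job finishes, then returns
        # the TunedModel; here we wait on the same completion condition.
        while not self.done():
            time.sleep(0.01)
        return "tunedModels/my-tuned-model-id"

operation = FakeTuningOperation(finish_at=time.monotonic() + 0.05)

print(operation.done())          # False while the "job" is still running
model_name = operation.result()  # blocks until "tuning" finishes
print(model_name)
```

The same two calls, done() for a non-blocking status check and result() for a blocking wait, are how you would interact with the operation returned by create_tuned_model.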
my_id = "my-tuned-model-id"
operation = palm.create_tuned_model(
    id=my_id,
    source_model="models/text-bison-001",
    training_data=[{'text_input': 'example input',
                    'output': 'example output'}, ...],
)

tuned_model = operation.result()  # Wait for tuning to finish

palm.generate_text(f"tunedModels/{my_id}", prompt="...")
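Each training example above is a dict keyed by input_key and output_key (defaulting to 'text_input' and 'output'). A quick sketch of shaping raw pairs into that record format; the helper name here is ours, not part of the library:

```python
def to_tuning_records(pairs, input_key="text_input", output_key="output"):
    """Shape raw (input, output) pairs into the record format shown above.

    input_key/output_key mirror the create_tuned_model parameters of the
    same names; this helper is illustrative, not part of the API.
    """
    return [{input_key: inp, output_key: out} for inp, out in pairs]

pairs = [("1", "2"), ("3", "4"), ("-3", "-2")]
training_data = to_tuning_records(pairs)
print(training_data[0])  # {'text_input': '1', 'output': '2'}
```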
Args | |
---|---|
`source_model` | The name of the model to tune. |
`training_data` | The dataset to tune the model on; accepts any `model_types.TuningDataOptions` value. |
`id` | The model identifier, used to refer to the model in the API. |
`display_name` | A human-readable name for display. |
`description` | A description of the tuned model. |
`temperature` | The default sampling temperature for the tuned model. |
`top_p` | The default `top_p` for the tuned model. |
`top_k` | The default `top_k` for the tuned model. |
`epoch_count` | The number of tuning epochs to run. An epoch is a pass over the whole dataset. |
`batch_size` | The number of examples to use in each training batch. |
`learning_rate` | The step size multiplier for the gradient updates. |
`input_key` | The key of the input field in each training example (default `'text_input'`). |
`output_key` | The key of the output field in each training example (default `'output'`). |
`client` | Which client to use. |
`request_options` | Options for the request. |
Returns | |
---|---|
An `operations.CreateTunedModelOperation` for tracking and managing the tuning job. |
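As a rough guide to how `epoch_count` and `batch_size` interact: each epoch is one pass over the dataset and each batch produces one gradient update, so the total number of updates is roughly `ceil(len(dataset) / batch_size) * epoch_count`. A back-of-the-envelope sketch (the service may batch differently in practice):

```python
import math

def approx_update_steps(num_examples, batch_size, epoch_count):
    # One gradient update per batch; batches per epoch = ceil(N / batch_size).
    return math.ceil(num_examples / batch_size) * epoch_count

print(approx_update_steps(1000, batch_size=16, epoch_count=5))  # 63 * 5 = 315
```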