# google.generativeai.ChatSession

Contains an ongoing conversation with the model.

```
>>> model = genai.GenerativeModel('models/gemini-1.5-flash')
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> print(response.text)
>>> response = chat.send_message("Hello again")
>>> print(response.text)
>>> response = chat.send_message(...
```

This `ChatSession` object collects the messages sent and received, in its `ChatSession.history` attribute.
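For example, you can inspect the accumulated turns after the exchange above. This is a minimal sketch, assuming each entry in `history` is a `protos.Content` message with a `role` and `parts`:

```
>>> for message in chat.history:
...     print(message.role, ':', message.parts[0].text)
```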
## Args

| Name | Description |
|---|---|
| `model` | The model to use in the chat. |
| `history` | A chat history to initialize the object with. |
## Attributes

| Name | Description |
|---|---|
| `history` | The chat history. |
| `last` | Returns the last received `genai.GenerateContentResponse`. |
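For instance, `last` makes the most recent reply available without holding on to the return value of `send_message`; a minimal sketch:

```
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> print(chat.last.text)
```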
## Methods

### rewind

Removes the last request/response pair from the chat history.
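A minimal sketch of the effect on `history` (this assumes, as in the current SDK, that `rewind` returns the removed request/response pair):

```
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> len(chat.history)
2
>>> removed_request, removed_response = chat.rewind()
>>> len(chat.history)
0
```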

### send_message

Sends the conversation history with the added message and returns the model's response. Appends the request and response to the conversation history.

```
>>> model = genai.GenerativeModel('models/gemini-1.5-flash')
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> print(response.text)
"Hello! How can I assist you today?"
>>> len(chat.history)
2
```

Call it with `stream=True` to receive response chunks as they are generated:

```
>>> chat = model.start_chat()
>>> response = chat.send_message("Explain quantum physics", stream=True)
>>> for chunk in response:
...     print(chunk.text, end='')
```

Once iteration over chunks is complete, the `response` and `ChatSession` are in states identical to the `stream=False` case. Some properties are not available until iteration is complete.

Like `GenerativeModel.generate_content`, this method lets you override the model's `generation_config` and `safety_settings`.
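For example, a per-call override might look like the following sketch; the specific `GenerationConfig` fields and the dict form of `safety_settings` are illustrative:

```
>>> response = chat.send_message(
...     "Tell me a story",
...     generation_config=genai.GenerationConfig(
...         temperature=0.2,
...         max_output_tokens=128),
...     safety_settings={'HARASSMENT': 'BLOCK_ONLY_HIGH'})
```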
#### Arguments

| Name | Description |
|---|---|
| `content` | The message contents. |
| `generation_config` | Overrides for the model's generation config. |
| `safety_settings` | Overrides for the model's safety settings. |
| `stream` | If True, yield response chunks as they are generated. |

### send_message_async

The async version of `ChatSession.send_message`.
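A minimal sketch of the async variant, assuming it is awaited inside a running event loop:

```
>>> import asyncio
>>> async def main():
...     model = genai.GenerativeModel('models/gemini-1.5-flash')
...     chat = model.start_chat()
...     response = await chat.send_message_async("Hello")
...     print(response.text)
>>> asyncio.run(main())
```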