Class: OCI::GenerativeAi::GenerativeAiClientCompositeOperations

Inherits:
Object
Defined in:
lib/oci/generative_ai/generative_ai_client_composite_operations.rb

Overview

This class provides a wrapper around GenerativeAiClient and offers convenience methods for operations that would otherwise need to be chained together. For example, instead of performing an action on a resource (e.g. launching an instance, creating a load balancer) and then using a waiter to wait for the resource to enter a given state, you can call a single method in this class to accomplish the same functionality.
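
For instance, the sketch below (not taken from the SDK reference; the OCID is a placeholder) deletes a model and blocks until its work request reports SUCCEEDED, instead of calling OCI::GenerativeAi::GenerativeAiClient#delete_model and then polling the work request by hand:

require 'oci'

# Hedged sketch: the OCID is a placeholder; 'SUCCEEDED' is a Models::WorkRequest#status value.
composite_ops = OCI::GenerativeAi::GenerativeAiClientCompositeOperations.new
composite_ops.delete_model_and_wait_for_state(
  'ocid1.generativeaimodel.oc1..exampleuniqueID',
  ['SUCCEEDED']
)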

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(service_client = OCI::GenerativeAi::GenerativeAiClient.new) ⇒ GenerativeAiClientCompositeOperations

Initializes a new GenerativeAiClientCompositeOperations

Parameters:

  • service_client (OCI::GenerativeAi::GenerativeAiClient) (defaults to: OCI::GenerativeAi::GenerativeAiClient.new)

    The client used to communicate with the service

# File 'lib/oci/generative_ai/generative_ai_client_composite_operations.rb', line 22

def initialize(service_client = OCI::GenerativeAi::GenerativeAiClient.new)
  @service_client = service_client
end
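
A short sketch of wiring the wrapper to an explicitly configured client; the region value is only an example, and the keyword options shown are the ones commonly accepted by OCI client constructors rather than anything documented on this page:

require 'oci'

# Assumes a standard ~/.oci/config profile; the region shown is only an example.
service_client = OCI::GenerativeAi::GenerativeAiClient.new(region: 'us-chicago-1')
composite_ops  = OCI::GenerativeAi::GenerativeAiClientCompositeOperations.new(service_client)
composite_ops.service_client.equal?(service_client) # => true (exposed via the read-only attribute)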

Instance Attribute Details

#service_client ⇒ OCI::GenerativeAi::GenerativeAiClient (readonly)

The OCI::GenerativeAi::GenerativeAiClient used to communicate with the service



# File 'lib/oci/generative_ai/generative_ai_client_composite_operations.rb', line 16

def service_client
  @service_client
end

Instance Method Details

#create_dedicated_ai_cluster_and_wait_for_state(create_dedicated_ai_cluster_details, wait_for_states = [], base_operation_opts = {}, waiter_opts = {}) ⇒ OCI::Response

Calls OCI::GenerativeAi::GenerativeAiClient#create_dedicated_ai_cluster and then waits for the Models::WorkRequest to enter the given state(s).

Parameters:

  • create_dedicated_ai_cluster_details (OCI::GenerativeAi::Models::CreateDedicatedAiClusterDetails)

    Details for the new dedicated AI cluster.

  • wait_for_states (Array<String>) (defaults to: [])

    An array of states to wait on. These should be valid values for Models::WorkRequest#status

  • base_operation_opts (Hash) (defaults to: {})

    Any optional arguments accepted by OCI::GenerativeAi::GenerativeAiClient#create_dedicated_ai_cluster

  • waiter_opts (Hash) (defaults to: {})

    Optional arguments for the waiter. Keys should be symbols, and the following keys are supported:

      * max_interval_seconds: The maximum interval between queries, in seconds.
      * max_wait_seconds: The maximum time to wait, in seconds.

Returns:

  (OCI::Response)


# File 'lib/oci/generative_ai/generative_ai_client_composite_operations.rb', line 41

def create_dedicated_ai_cluster_and_wait_for_state(create_dedicated_ai_cluster_details, wait_for_states = [], base_operation_opts = {}, waiter_opts = {})
  operation_result = @service_client.create_dedicated_ai_cluster(create_dedicated_ai_cluster_details, base_operation_opts)
  use_util = OCI::GenerativeAi::Util.respond_to?(:wait_on_work_request)

  return operation_result if wait_for_states.empty? && !use_util

  lowered_wait_for_states = wait_for_states.map(&:downcase)
  wait_for_resource_id = operation_result.headers['opc-work-request-id']
  return operation_result if wait_for_resource_id.nil? || wait_for_resource_id.empty?

  begin
    if use_util
      waiter_result = OCI::GenerativeAi::Util.wait_on_work_request(
        @service_client,
        wait_for_resource_id,
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    else
      waiter_result = @service_client.get_work_request(wait_for_resource_id).wait_until(
        eval_proc: ->(response) { response.data.respond_to?(:status) && lowered_wait_for_states.include?(response.data.status.downcase) },
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    end
    result_to_return = waiter_result

    return result_to_return
  rescue StandardError
    raise OCI::Errors::CompositeOperationError.new(partial_results: [operation_result])
  end
end
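
A hedged usage sketch, reusing composite_ops from the constructor example above. The attribute names on the details object (compartment_id, type, unit_count, unit_shape) and their values are assumptions about the service API, not taken from this page:

# Sketch only: details attributes are assumptions, OCIDs are placeholders.
details = OCI::GenerativeAi::Models::CreateDedicatedAiClusterDetails.new(
  compartment_id: 'ocid1.compartment.oc1..exampleuniqueID',
  type: 'HOSTING',            # assumed; fine-tuning clusters would use a different type
  unit_count: 1,
  unit_shape: 'LARGE_COHERE'  # assumed shape name
)
composite_ops.create_dedicated_ai_cluster_and_wait_for_state(
  details,
  ['SUCCEEDED'],               # Models::WorkRequest#status values to stop on
  {},                          # base_operation_opts, forwarded to #create_dedicated_ai_cluster
  { max_wait_seconds: 3600 }   # waiter_opts; provisioning can outlast the 1200-second default
)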

#create_endpoint_and_wait_for_state(create_endpoint_details, wait_for_states = [], base_operation_opts = {}, waiter_opts = {}) ⇒ OCI::Response

Calls OCI::GenerativeAi::GenerativeAiClient#create_endpoint and then waits for the Models::WorkRequest to enter the given state(s).

Parameters:

  • create_endpoint_details (OCI::GenerativeAi::Models::CreateEndpointDetails)

    Details for the new endpoint.

  • wait_for_states (Array<String>) (defaults to: [])

    An array of states to wait on. These should be valid values for Models::WorkRequest#status

  • base_operation_opts (Hash) (defaults to: {})

    Any optional arguments accepted by OCI::GenerativeAi::GenerativeAiClient#create_endpoint

  • waiter_opts (Hash) (defaults to: {})

    Optional arguments for the waiter. Keys should be symbols, and the following keys are supported:

      * max_interval_seconds: The maximum interval between queries, in seconds.
      * max_wait_seconds: The maximum time to wait, in seconds.

Returns:

  (OCI::Response)


# File 'lib/oci/generative_ai/generative_ai_client_composite_operations.rb', line 91

def create_endpoint_and_wait_for_state(create_endpoint_details, wait_for_states = [], base_operation_opts = {}, waiter_opts = {})
  operation_result = @service_client.create_endpoint(create_endpoint_details, base_operation_opts)
  use_util = OCI::GenerativeAi::Util.respond_to?(:wait_on_work_request)

  return operation_result if wait_for_states.empty? && !use_util

  lowered_wait_for_states = wait_for_states.map(&:downcase)
  wait_for_resource_id = operation_result.headers['opc-work-request-id']
  return operation_result if wait_for_resource_id.nil? || wait_for_resource_id.empty?

  begin
    if use_util
      waiter_result = OCI::GenerativeAi::Util.wait_on_work_request(
        @service_client,
        wait_for_resource_id,
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    else
      waiter_result = @service_client.get_work_request(wait_for_resource_id).wait_until(
        eval_proc: ->(response) { response.data.respond_to?(:status) && lowered_wait_for_states.include?(response.data.status.downcase) },
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    end
    result_to_return = waiter_result

    return result_to_return
  rescue StandardError
    raise OCI::Errors::CompositeOperationError.new(partial_results: [operation_result])
  end
end
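
A hedged sketch, again using composite_ops from the earlier examples. The CreateEndpointDetails attribute names follow the API's snake_case convention and are assumptions, as are the placeholder OCIDs:

# Sketch: field names are assumptions, OCIDs are placeholders.
endpoint_details = OCI::GenerativeAi::Models::CreateEndpointDetails.new(
  compartment_id: 'ocid1.compartment.oc1..exampleuniqueID',
  model_id: 'ocid1.generativeaimodel.oc1..exampleuniqueID',
  dedicated_ai_cluster_id: 'ocid1.generativeaidedicatedaicluster.oc1..exampleuniqueID'
)
response = composite_ops.create_endpoint_and_wait_for_state(endpoint_details, ['SUCCEEDED'])
# When a wait happened, the returned OCI::Response typically wraps the polled work request.
puts response.data.status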

#create_model_and_wait_for_state(create_model_details, wait_for_states = [], base_operation_opts = {}, waiter_opts = {}) ⇒ OCI::Response

Calls OCI::GenerativeAi::GenerativeAiClient#create_model and then waits for the Models::WorkRequest to enter the given state(s).

Parameters:

  • create_model_details (OCI::GenerativeAi::Models::CreateModelDetails)

    Details for the new model.

  • wait_for_states (Array<String>) (defaults to: [])

    An array of states to wait on. These should be valid values for Models::WorkRequest#status

  • base_operation_opts (Hash) (defaults to: {})

    Any optional arguments accepted by OCI::GenerativeAi::GenerativeAiClient#create_model

  • waiter_opts (Hash) (defaults to: {})

    Optional arguments for the waiter. Keys should be symbols, and the following keys are supported:

      * max_interval_seconds: The maximum interval between queries, in seconds.
      * max_wait_seconds: The maximum time to wait, in seconds.

Returns:

  (OCI::Response)


# File 'lib/oci/generative_ai/generative_ai_client_composite_operations.rb', line 141

def create_model_and_wait_for_state(create_model_details, wait_for_states = [], base_operation_opts = {}, waiter_opts = {})
  operation_result = @service_client.create_model(create_model_details, base_operation_opts)
  use_util = OCI::GenerativeAi::Util.respond_to?(:wait_on_work_request)

  return operation_result if wait_for_states.empty? && !use_util

  lowered_wait_for_states = wait_for_states.map(&:downcase)
  wait_for_resource_id = operation_result.headers['opc-work-request-id']
  return operation_result if wait_for_resource_id.nil? || wait_for_resource_id.empty?

  begin
    if use_util
      waiter_result = OCI::GenerativeAi::Util.wait_on_work_request(
        @service_client,
        wait_for_resource_id,
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    else
      waiter_result = @service_client.get_work_request(wait_for_resource_id).wait_until(
        eval_proc: ->(response) { response.data.respond_to?(:status) && lowered_wait_for_states.include?(response.data.status.downcase) },
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    end
    result_to_return = waiter_result

    return result_to_return
  rescue StandardError
    raise OCI::Errors::CompositeOperationError.new(partial_results: [operation_result])
  end
end
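
If the underlying call succeeds but the wait fails or times out, the method raises OCI::Errors::CompositeOperationError carrying the original response in partial_results (see the rescue clause above). A hedged handling sketch follows; the CreateModelDetails attributes shown are incomplete placeholders, since the real fine-tuning payload has more required fields:

# Sketch: the details are placeholders; see Models::CreateModelDetails for the required fields.
model_details = OCI::GenerativeAi::Models::CreateModelDetails.new(
  compartment_id: 'ocid1.compartment.oc1..exampleuniqueID',
  display_name: 'my-fine-tuned-model'
)

begin
  composite_ops.create_model_and_wait_for_state(model_details, ['SUCCEEDED'])
rescue OCI::Errors::CompositeOperationError => e
  # The create call itself went through; only the wait failed.
  work_request_id = e.partial_results.first.headers['opc-work-request-id']
  puts "create_model accepted; waiter failed. Work request: #{work_request_id}"
end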

#delete_dedicated_ai_cluster_and_wait_for_state(dedicated_ai_cluster_id, wait_for_states = [], base_operation_opts = {}, waiter_opts = {}) ⇒ OCI::Response

Calls OCI::GenerativeAi::GenerativeAiClient#delete_dedicated_ai_cluster and then waits for the Models::WorkRequest to enter the given state(s).

Parameters:

  • dedicated_ai_cluster_id (String)

    The OCID of the dedicated AI cluster.

  • wait_for_states (Array<String>) (defaults to: [])

    An array of states to wait on. These should be valid values for Models::WorkRequest#status

  • base_operation_opts (Hash) (defaults to: {})

    Any optional arguments accepted by OCI::GenerativeAi::GenerativeAiClient#delete_dedicated_ai_cluster

  • waiter_opts (Hash) (defaults to: {})

    Optional arguments for the waiter. Keys should be symbols, and the following keys are supported:

      * max_interval_seconds: The maximum interval between queries, in seconds.
      * max_wait_seconds: The maximum time to wait, in seconds.

Returns:

  (OCI::Response)


# File 'lib/oci/generative_ai/generative_ai_client_composite_operations.rb', line 191

def delete_dedicated_ai_cluster_and_wait_for_state(dedicated_ai_cluster_id, wait_for_states = [], base_operation_opts = {}, waiter_opts = {})
  operation_result = @service_client.delete_dedicated_ai_cluster(dedicated_ai_cluster_id, base_operation_opts)
  use_util = OCI::GenerativeAi::Util.respond_to?(:wait_on_work_request)

  return operation_result if wait_for_states.empty? && !use_util

  lowered_wait_for_states = wait_for_states.map(&:downcase)
  wait_for_resource_id = operation_result.headers['opc-work-request-id']
  return operation_result if wait_for_resource_id.nil? || wait_for_resource_id.empty?

  begin
    if use_util
      waiter_result = OCI::GenerativeAi::Util.wait_on_work_request(
        @service_client,
        wait_for_resource_id,
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    else
      waiter_result = @service_client.get_work_request(wait_for_resource_id).wait_until(
        eval_proc: ->(response) { response.data.respond_to?(:status) && lowered_wait_for_states.include?(response.data.status.downcase) },
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    end
    result_to_return = waiter_result

    return result_to_return
  rescue StandardError
    raise OCI::Errors::CompositeOperationError.new(partial_results: [operation_result])
  end
end
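
A brief sketch of waiting on more than one status and inspecting the outcome afterwards; treating FAILED as a valid Models::WorkRequest#status value alongside SUCCEEDED is an assumption:

# Sketch: stop once the work request reaches either status (when the plain waiter path is used).
response = composite_ops.delete_dedicated_ai_cluster_and_wait_for_state(
  'ocid1.generativeaidedicatedaicluster.oc1..exampleuniqueID',
  %w[SUCCEEDED FAILED]
)
puts response.data.status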

#delete_endpoint_and_wait_for_state(endpoint_id, wait_for_states = [], base_operation_opts = {}, waiter_opts = {}) ⇒ OCI::Response

Calls OCI::GenerativeAi::GenerativeAiClient#delete_endpoint and then waits for the Models::WorkRequest to enter the given state(s).

Parameters:

  • endpoint_id (String)

    The OCID of the endpoint.

  • wait_for_states (Array<String>) (defaults to: [])

    An array of states to wait on. These should be valid values for Models::WorkRequest#status

  • base_operation_opts (Hash) (defaults to: {})

    Any optional arguments accepted by OCI::GenerativeAi::GenerativeAiClient#delete_endpoint

  • waiter_opts (Hash) (defaults to: {})

    Optional arguments for the waiter. Keys should be symbols, and the following keys are supported:

      * max_interval_seconds: The maximum interval between queries, in seconds.
      * max_wait_seconds: The maximum time to wait, in seconds.

Returns:

  (OCI::Response)


# File 'lib/oci/generative_ai/generative_ai_client_composite_operations.rb', line 241

def delete_endpoint_and_wait_for_state(endpoint_id, wait_for_states = [], base_operation_opts = {}, waiter_opts = {})
  operation_result = @service_client.delete_endpoint(endpoint_id, base_operation_opts)
  use_util = OCI::GenerativeAi::Util.respond_to?(:wait_on_work_request)

  return operation_result if wait_for_states.empty? && !use_util

  lowered_wait_for_states = wait_for_states.map(&:downcase)
  wait_for_resource_id = operation_result.headers['opc-work-request-id']
  return operation_result if wait_for_resource_id.nil? || wait_for_resource_id.empty?

  begin
    if use_util
      waiter_result = OCI::GenerativeAi::Util.wait_on_work_request(
        @service_client,
        wait_for_resource_id,
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    else
      waiter_result = @service_client.get_work_request(wait_for_resource_id).wait_until(
        eval_proc: ->(response) { response.data.respond_to?(:status) && lowered_wait_for_states.include?(response.data.status.downcase) },
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    end
    result_to_return = waiter_result

    return result_to_return
  rescue StandardError
    raise OCI::Errors::CompositeOperationError.new(partial_results: [operation_result])
  end
end
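
A hedged sketch of forwarding base_operation_opts to the underlying call; :if_match is assumed to be among the options #delete_endpoint accepts (the usual OCI optimistic-locking pattern) and is not documented on this page:

# Sketch: :if_match is an assumed option; the ETag would come from a prior GET of the endpoint.
composite_ops.delete_endpoint_and_wait_for_state(
  'ocid1.generativeaiendpoint.oc1..exampleuniqueID',
  ['SUCCEEDED'],
  { if_match: 'etag-from-a-previous-get' }
)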

#delete_model_and_wait_for_state(model_id, wait_for_states = [], base_operation_opts = {}, waiter_opts = {}) ⇒ OCI::Response

Calls OCI::GenerativeAi::GenerativeAiClient#delete_model and then waits for the Models::WorkRequest to enter the given state(s).

Parameters:

  • model_id (String)

    The model OCID

  • wait_for_states (Array<String>) (defaults to: [])

    An array of states to wait on. These should be valid values for Models::WorkRequest#status

  • base_operation_opts (Hash) (defaults to: {})

    Any optional arguments accepted by OCI::GenerativeAi::GenerativeAiClient#delete_model

  • waiter_opts (Hash) (defaults to: {})

    Optional arguments for the waiter. Keys should be symbols, and the following keys are supported:

      * max_interval_seconds: The maximum interval between queries, in seconds.
      * max_wait_seconds: The maximum time to wait, in seconds.

Returns:

  (OCI::Response)


# File 'lib/oci/generative_ai/generative_ai_client_composite_operations.rb', line 291

def delete_model_and_wait_for_state(model_id, wait_for_states = [], base_operation_opts = {}, waiter_opts = {})
  operation_result = @service_client.delete_model(model_id, base_operation_opts)
  use_util = OCI::GenerativeAi::Util.respond_to?(:wait_on_work_request)

  return operation_result if wait_for_states.empty? && !use_util

  lowered_wait_for_states = wait_for_states.map(&:downcase)
  wait_for_resource_id = operation_result.headers['opc-work-request-id']
  return operation_result if wait_for_resource_id.nil? || wait_for_resource_id.empty?

  begin
    if use_util
      waiter_result = OCI::GenerativeAi::Util.wait_on_work_request(
        @service_client,
        wait_for_resource_id,
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    else
      waiter_result = @service_client.get_work_request(wait_for_resource_id).wait_until(
        eval_proc: ->(response) { response.data.respond_to?(:status) && lowered_wait_for_states.include?(response.data.status.downcase) },
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    end
    result_to_return = waiter_result

    return result_to_return
  rescue StandardError
    raise OCI::Errors::CompositeOperationError.new(partial_results: [operation_result])
  end
end
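
A short sketch of overriding the waiter defaults, which the implementation above sets to a 30-second poll interval and a 1200-second ceiling:

# Sketch: poll every 10 seconds and give up after 10 minutes instead of the defaults.
composite_ops.delete_model_and_wait_for_state(
  'ocid1.generativeaimodel.oc1..exampleuniqueID',
  ['SUCCEEDED'],
  {},
  { max_interval_seconds: 10, max_wait_seconds: 600 }
)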

#update_dedicated_ai_cluster_and_wait_for_state(dedicated_ai_cluster_id, update_dedicated_ai_cluster_details, wait_for_states = [], base_operation_opts = {}, waiter_opts = {}) ⇒ OCI::Response

Calls OCI::GenerativeAi::GenerativeAiClient#update_dedicated_ai_cluster and then waits for the Models::WorkRequest to enter the given state(s).

Parameters:

  • dedicated_ai_cluster_id (String)

    The OCID of the dedicated AI cluster.

  • update_dedicated_ai_cluster_details (OCI::GenerativeAi::Models::UpdateDedicatedAiClusterDetails)

    The information to be updated.

  • wait_for_states (Array<String>) (defaults to: [])

    An array of states to wait on. These should be valid values for Models::WorkRequest#status

  • base_operation_opts (Hash) (defaults to: {})

    Any optional arguments accepted by OCI::GenerativeAi::GenerativeAiClient#update_dedicated_ai_cluster

  • waiter_opts (Hash) (defaults to: {})

    Optional arguments for the waiter. Keys should be symbols, and the following keys are supported:

      * max_interval_seconds: The maximum interval between queries, in seconds.
      * max_wait_seconds: The maximum time to wait, in seconds.

Returns:

  (OCI::Response)


# File 'lib/oci/generative_ai/generative_ai_client_composite_operations.rb', line 342

def update_dedicated_ai_cluster_and_wait_for_state(dedicated_ai_cluster_id, update_dedicated_ai_cluster_details, wait_for_states = [], base_operation_opts = {}, waiter_opts = {})
  operation_result = @service_client.update_dedicated_ai_cluster(dedicated_ai_cluster_id, update_dedicated_ai_cluster_details, base_operation_opts)
  use_util = OCI::GenerativeAi::Util.respond_to?(:wait_on_work_request)

  return operation_result if wait_for_states.empty? && !use_util

  lowered_wait_for_states = wait_for_states.map(&:downcase)
  wait_for_resource_id = operation_result.headers['opc-work-request-id']
  return operation_result if wait_for_resource_id.nil? || wait_for_resource_id.empty?

  begin
    if use_util
      waiter_result = OCI::GenerativeAi::Util.wait_on_work_request(
        @service_client,
        wait_for_resource_id,
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    else
      waiter_result = @service_client.get_work_request(wait_for_resource_id).wait_until(
        eval_proc: ->(response) { response.data.respond_to?(:status) && lowered_wait_for_states.include?(response.data.status.downcase) },
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    end
    result_to_return = waiter_result

    return result_to_return
  rescue StandardError
    raise OCI::Errors::CompositeOperationError.new(partial_results: [operation_result])
  end
end
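
A hedged sketch; treating unit_count as an updatable attribute of UpdateDedicatedAiClusterDetails is an assumption:

# Sketch: scale the cluster and wait for the resulting work request to finish.
update_details = OCI::GenerativeAi::Models::UpdateDedicatedAiClusterDetails.new(unit_count: 2)
composite_ops.update_dedicated_ai_cluster_and_wait_for_state(
  'ocid1.generativeaidedicatedaicluster.oc1..exampleuniqueID',
  update_details,
  ['SUCCEEDED']
)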

#update_endpoint_and_wait_for_state(endpoint_id, update_endpoint_details, wait_for_states = [], base_operation_opts = {}, waiter_opts = {}) ⇒ OCI::Response

Calls OCI::GenerativeAi::GenerativeAiClient#update_endpoint and then waits for the Models::WorkRequest to enter the given state(s).

Parameters:

  • endpoint_id (String)

    The OCID of the endpoint.

  • update_endpoint_details (OCI::GenerativeAi::Models::UpdateEndpointDetails)

    The information to be updated.

  • wait_for_states (Array<String>) (defaults to: [])

    An array of states to wait on. These should be valid values for Models::WorkRequest#status

  • base_operation_opts (Hash) (defaults to: {})

    Any optional arguments accepted by OCI::GenerativeAi::GenerativeAiClient#update_endpoint

  • waiter_opts (Hash) (defaults to: {})

    Optional arguments for the waiter. Keys should be symbols, and the following keys are supported:

      * max_interval_seconds: The maximum interval between queries, in seconds.
      * max_wait_seconds: The maximum time to wait, in seconds.

Returns:

  (OCI::Response)


# File 'lib/oci/generative_ai/generative_ai_client_composite_operations.rb', line 393

def update_endpoint_and_wait_for_state(endpoint_id, update_endpoint_details, wait_for_states = [], base_operation_opts = {}, waiter_opts = {})
  operation_result = @service_client.update_endpoint(endpoint_id, update_endpoint_details, base_operation_opts)
  use_util = OCI::GenerativeAi::Util.respond_to?(:wait_on_work_request)

  return operation_result if wait_for_states.empty? && !use_util

  lowered_wait_for_states = wait_for_states.map(&:downcase)
  wait_for_resource_id = operation_result.headers['opc-work-request-id']
  return operation_result if wait_for_resource_id.nil? || wait_for_resource_id.empty?

  begin
    if use_util
      waiter_result = OCI::GenerativeAi::Util.wait_on_work_request(
        @service_client,
        wait_for_resource_id,
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    else
      waiter_result = @service_client.get_work_request(wait_for_resource_id).wait_until(
        eval_proc: ->(response) { response.data.respond_to?(:status) && lowered_wait_for_states.include?(response.data.status.downcase) },
        max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
        max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
      )
    end
    result_to_return = waiter_result

    return result_to_return
  rescue StandardError
    raise OCI::Errors::CompositeOperationError.new(partial_results: [operation_result])
  end
end
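
A hedged sketch; display_name and description as UpdateEndpointDetails attributes are assumptions:

# Sketch: rename an endpoint and wait for the update work request to complete.
update_details = OCI::GenerativeAi::Models::UpdateEndpointDetails.new(
  display_name: 'prod-chat-endpoint',
  description: 'Endpoint serving the production chat workload'
)
composite_ops.update_endpoint_and_wait_for_state(
  'ocid1.generativeaiendpoint.oc1..exampleuniqueID',
  update_details,
  ['SUCCEEDED']
)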

#update_model_and_wait_for_state(model_id, update_model_details, wait_for_states = [], base_operation_opts = {}, waiter_opts = {}) ⇒ OCI::Response

Calls OCI::GenerativeAi::GenerativeAiClient#update_model and then waits for the Models::Model acted upon to enter the given state(s).

Parameters:

  • model_id (String)

    The model OCID

  • update_model_details (OCI::GenerativeAi::Models::UpdateModelDetails)

    The model information to be updated.

  • wait_for_states (Array<String>) (defaults to: [])

    An array of states to wait on. These should be valid values for Models::Model#lifecycle_state

  • base_operation_opts (Hash) (defaults to: {})

    Any optional arguments accepted by OCI::GenerativeAi::GenerativeAiClient#update_model

  • waiter_opts (Hash) (defaults to: {})

    Optional arguments for the waiter. Keys should be symbols, and the following keys are supported:

      * max_interval_seconds: The maximum interval between queries, in seconds.
      * max_wait_seconds: The maximum time to wait, in seconds.

Returns:

  (OCI::Response)


# File 'lib/oci/generative_ai/generative_ai_client_composite_operations.rb', line 444

def update_model_and_wait_for_state(model_id, update_model_details, wait_for_states = [], base_operation_opts = {}, waiter_opts = {})
  operation_result = @service_client.update_model(model_id, update_model_details, base_operation_opts)

  return operation_result if wait_for_states.empty?

  lowered_wait_for_states = wait_for_states.map(&:downcase)
  wait_for_resource_id = operation_result.data.id

  begin
    waiter_result = @service_client.get_model(wait_for_resource_id).wait_until(
      eval_proc: ->(response) { response.data.respond_to?(:lifecycle_state) && lowered_wait_for_states.include?(response.data.lifecycle_state.downcase) },
      max_interval_seconds: waiter_opts.key?(:max_interval_seconds) ? waiter_opts[:max_interval_seconds] : 30,
      max_wait_seconds: waiter_opts.key?(:max_wait_seconds) ? waiter_opts[:max_wait_seconds] : 1200
    )
    result_to_return = waiter_result

    return result_to_return
  rescue StandardError
    raise OCI::Errors::CompositeOperationError.new(partial_results: [operation_result])
  end
end
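
Unlike the other composites on this page, this one waits on Models::Model#lifecycle_state and the returned response comes from get_model rather than a work-request lookup. A hedged sketch, assuming ACTIVE is among the valid lifecycle states and display_name is updatable:

# Sketch: rename a model and wait until it reports ACTIVE again.
update_details = OCI::GenerativeAi::Models::UpdateModelDetails.new(display_name: 'renamed-model')
response = composite_ops.update_model_and_wait_for_state(
  'ocid1.generativeaimodel.oc1..exampleuniqueID',
  update_details,
  ['ACTIVE']   # matched case-insensitively against Models::Model#lifecycle_state
)
puts response.data.lifecycle_state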