Google Cloud Native is in preview. Google Cloud Classic is fully supported.
Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi
google-native.notebooks/v1.getSchedule
Gets details of a schedule.
Using getSchedule
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getSchedule(args: GetScheduleArgs, opts?: InvokeOptions): Promise<GetScheduleResult>
function getScheduleOutput(args: GetScheduleOutputArgs, opts?: InvokeOptions): Output<GetScheduleResult>
def get_schedule(location: Optional[str] = None,
                 project: Optional[str] = None,
                 schedule_id: Optional[str] = None,
                 opts: Optional[InvokeOptions] = None) -> GetScheduleResult
def get_schedule_output(location: Optional[pulumi.Input[str]] = None,
                        project: Optional[pulumi.Input[str]] = None,
                        schedule_id: Optional[pulumi.Input[str]] = None,
                        opts: Optional[InvokeOptions] = None) -> Output[GetScheduleResult]
func LookupSchedule(ctx *Context, args *LookupScheduleArgs, opts ...InvokeOption) (*LookupScheduleResult, error)
func LookupScheduleOutput(ctx *Context, args *LookupScheduleOutputArgs, opts ...InvokeOption) LookupScheduleResultOutput
> Note: This function is named `LookupSchedule` in the Go SDK.
public static class GetSchedule
{
public static Task<GetScheduleResult> InvokeAsync(GetScheduleArgs args, InvokeOptions? opts = null)
public static Output<GetScheduleResult> Invoke(GetScheduleInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetScheduleResult> getSchedule(GetScheduleArgs args, InvokeOptions options)
public static Output<GetScheduleResult> getSchedule(GetScheduleArgs args, InvokeOptions options)
fn::invoke:
function: google-native:notebooks/v1:getSchedule
arguments:
# arguments dictionary
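As a sketch, the YAML form can be invoked with concrete arguments filled in. The project, location, schedule ID, and variable name below are placeholder values invented for illustration:

```yaml
variables:
  mySchedule:                       # hypothetical variable name
    fn::invoke:
      function: google-native:notebooks/v1:getSchedule
      arguments:
        project: my-project         # placeholder
        location: us-central1       # placeholder
        scheduleId: my-schedule     # placeholder
```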
The following arguments are supported (property names follow each SDK's casing convention, e.g. `scheduleId` in TypeScript and C#, `schedule_id` in Python):

- `location` (string) - This property is required.
- `scheduleId` (string) - This property is required.
- `project` (string)
getSchedule Result
The following output properties are available (property names and collection types follow each SDK's conventions, e.g. `createTime` in TypeScript, `create_time` in Python):

- `createTime` (string) - Time the schedule was created.
- `cronSchedule` (string) - Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. `0 0 * * WED` = every Wednesday. More examples: https://crontab.guru/examples.html
- `description` (string) - A brief description of this environment.
- `displayName` (string) - Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens `-`, and underscores `_`.
- `executionTemplate` (ExecutionTemplateResponse) - Notebook Execution Template corresponding to this schedule.
- `name` (string) - The name of this schedule. Format: `projects/{project_id}/locations/{location}/schedules/{schedule_id}`
- `recentExecutions` (list of ExecutionResponse) - The most recent execution names triggered from this schedule and their corresponding states.
- `state` (string)
- `timeZone` (string) - Time zone on which the cron_schedule runs. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).
- `updateTime` (string) - Time the schedule was last updated.
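The `cronSchedule` format above uses the standard five cron fields (minute, hour, day of month, month, day of week). As a rough illustration, here is a minimal, hypothetical Python checker (not part of any Pulumi SDK) that tests whether a datetime matches a simple expression such as `0 0 * * WED`; it handles only `*`, plain numbers, day-of-week names, and comma lists, not ranges or step values:

```python
from datetime import datetime

# Hypothetical helper, NOT part of the Pulumi SDK: checks whether a datetime
# matches a five-field cron expression (minute hour day-of-month month day-of-week).
DOW = {"SUN": 0, "MON": 1, "TUE": 2, "WED": 3, "THU": 4, "FRI": 5, "SAT": 6}

def field_matches(field: str, value: int) -> bool:
    if field == "*":
        return True
    # Map day-of-week names to numbers; numeric parts pass through unchanged.
    return any(int(DOW.get(part, part)) == value for part in field.split(","))

def cron_matches(expr: str, dt: datetime) -> bool:
    minute, hour, dom, month, dow = expr.split()
    # Python's weekday() has Monday=0, but cron uses Sunday=0, so shift by one.
    cron_dow = (dt.weekday() + 1) % 7
    return (field_matches(minute, dt.minute)
            and field_matches(hour, dt.hour)
            and field_matches(dom, dt.day)
            and field_matches(month, dt.month)
            and field_matches(dow, cron_dow))

# "0 0 * * WED" = every Wednesday at midnight (2023-11-29 was a Wednesday).
print(cron_matches("0 0 * * WED", datetime(2023, 11, 29, 0, 0)))  # → True
```

Real schedules are evaluated server-side in the schedule's `timeZone`; this sketch ignores time zones entirely.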
Supporting Types
DataprocParametersResponse
- `cluster` (string) - This property is required. URI for cluster used to run Dataproc execution. Format: `projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}`
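The cluster URI format above is strict enough to validate client-side before wiring it into a schedule. A hypothetical helper (the function name and error message are invented for illustration; this is not provider API):

```python
import re

# Matches the documented Dataproc cluster URI format:
# projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
CLUSTER_URI = re.compile(
    r"^projects/(?P<project>[^/]+)/regions/(?P<region>[^/]+)/clusters/(?P<cluster>[^/]+)$"
)

def parse_cluster_uri(uri: str) -> dict:
    """Split a Dataproc cluster URI into its parts, or raise ValueError."""
    m = CLUSTER_URI.match(uri)
    if m is None:
        raise ValueError(f"not a Dataproc cluster URI: {uri!r}")
    return m.groupdict()

print(parse_cluster_uri("projects/my-proj/regions/us-central1/clusters/demo"))
# → {'project': 'my-proj', 'region': 'us-central1', 'cluster': 'demo'}
```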
ExecutionResponse
- `createTime` (string) - This property is required. Time the Execution was instantiated.
- `description` (string) - This property is required. A brief description of this execution.
- `displayName` (string) - This property is required. Name used for UI purposes. Name can only contain alphanumeric characters and underscores `_`.
- `executionTemplate` (ExecutionTemplateResponse) - This property is required. Execute metadata including name, hardware spec, region, labels, etc.
- `jobUri` (string) - This property is required. The URI of the external job used to execute the notebook.
- `name` (string) - This property is required. The resource name of the execution. Format: `projects/{project_id}/locations/{location}/executions/{execution_id}`
- `outputNotebookFile` (string) - This property is required. Output notebook file generated by this execution.
- `state` (string) - This property is required. State of the underlying AI Platform job.
- `updateTime` (string) - This property is required. Time the Execution was last updated.
ExecutionTemplateResponse
- Accelerator
Config This property is required. Pulumi.Google Native. Notebooks. V1. Inputs. Scheduler Accelerator Config Response - Configuration (count and accelerator type) for hardware running notebook execution.
- Container
Image Uri This property is required. string - Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- Dataproc
Parameters This property is required. Pulumi.Google Native. Notebooks. V1. Inputs. Dataproc Parameters Response - Parameters used in Dataproc JobType executions.
- Input
Notebook File This property is required. string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format:
gs://{bucket_name}/{folder}/{notebook_file_name}
Ex:gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- Job
Type This property is required. string - The type of Job to be used on this execution.
- Kernel
Spec This property is required. string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels
This property is required. Dictionary<string, string> - Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- Master
Type This property is required. string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when
scaleTier
is set toCUSTOM
. You can use certain Compute Engine machine types directly in this field. The following types are supported: -n1-standard-4
-n1-standard-8
-n1-standard-16
-n1-standard-32
-n1-standard-64
-n1-standard-96
-n1-highmem-2
-n1-highmem-4
-n1-highmem-8
-n1-highmem-16
-n1-highmem-32
-n1-highmem-64
-n1-highmem-96
-n1-highcpu-16
-n1-highcpu-32
-n1-highcpu-64
-n1-highcpu-96
Alternatively, you can use the following legacy machine types: -standard
-large_model
-complex_model_s
-complex_model_m
-complex_model_l
-standard_gpu
-complex_model_m_gpu
-complex_model_l_gpu
-standard_p100
-complex_model_m_p100
-standard_v100
-large_model_v100
-complex_model_m_v100
-complex_model_l_v100
Finally, if you want to use a TPU for training, specifycloud_tpu
in this field. Learn more about the special configuration options for training with TPU. - Output
Notebook Folder This property is required. string - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format:
gs://{bucket_name}/{folder}
Ex:gs://notebook_user/scheduled_notebooks
- Parameters
This property is required. string - Parameters used within the 'input_notebook_file' notebook.
- Params
Yaml File This property is required. string - Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex:
gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- Scale
Tier This property is required. string - Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
- Service
Account This property is required. string - The email address of a service account to use when running the execution. You must have the
iam.serviceAccounts.actAs
permission for the specified service account. - Tensorboard
This property is required. string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format:
projects/{project}/locations/{location}/tensorboards/{tensorboard}
- Vertex
Ai Parameters This property is required. Pulumi.Google Native. Notebooks. V1. Inputs. Vertex AIParameters Response - Parameters used in Vertex AI JobType executions.
- Accelerator
Config This property is required. SchedulerAccelerator Config Response - Configuration (count and accelerator type) for hardware running notebook execution.
- Container
Image Uri This property is required. string - Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- Dataproc
Parameters This property is required. DataprocParameters Response - Parameters used in Dataproc JobType executions.
- Input
Notebook File This property is required. string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format:
gs://{bucket_name}/{folder}/{notebook_file_name}
Ex:gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- Job
Type This property is required. string - The type of Job to be used on this execution.
- Kernel
Spec This property is required. string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels
This property is required. map[string]string - Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- Master
Type This property is required. string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when
scaleTier
is set toCUSTOM
. You can use certain Compute Engine machine types directly in this field. The following types are supported: -n1-standard-4
-n1-standard-8
-n1-standard-16
-n1-standard-32
-n1-standard-64
-n1-standard-96
-n1-highmem-2
-n1-highmem-4
-n1-highmem-8
-n1-highmem-16
-n1-highmem-32
-n1-highmem-64
-n1-highmem-96
-n1-highcpu-16
-n1-highcpu-32
-n1-highcpu-64
-n1-highcpu-96
Alternatively, you can use the following legacy machine types: -standard
-large_model
-complex_model_s
-complex_model_m
-complex_model_l
-standard_gpu
-complex_model_m_gpu
-complex_model_l_gpu
-standard_p100
-complex_model_m_p100
-standard_v100
-large_model_v100
-complex_model_m_v100
-complex_model_l_v100
Finally, if you want to use a TPU for training, specifycloud_tpu
in this field. Learn more about the special configuration options for training with TPU. - Output
Notebook Folder This property is required. string - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format:
gs://{bucket_name}/{folder}
Ex:gs://notebook_user/scheduled_notebooks
- Parameters
This property is required. string - Parameters used within the 'input_notebook_file' notebook.
- Params
Yaml File This property is required. string - Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex:
gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- Scale
Tier This property is required. string - Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
- Service
Account This property is required. string - The email address of a service account to use when running the execution. You must have the
iam.serviceAccounts.actAs
permission for the specified service account. - Tensorboard
This property is required. string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format:
projects/{project}/locations/{location}/tensorboards/{tensorboard}
- Vertex
Ai Parameters This property is required. VertexAIParameters Response - Parameters used in Vertex AI JobType executions.
- accelerator
Config This property is required. SchedulerAccelerator Config Response - Configuration (count and accelerator type) for hardware running notebook execution.
- container
Image Uri This property is required. String - Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataproc
Parameters This property is required. DataprocParameters Response - Parameters used in Dataproc JobType executions.
- input
Notebook File This property is required. String - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format:
gs://{bucket_name}/{folder}/{notebook_file_name}
Ex:gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- job
Type This property is required. String - The type of Job to be used on this execution.
- kernel
Spec This property is required. String - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels
This property is required. Map<String,String> - Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- master
Type This property is required. String - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when
scaleTier
is set toCUSTOM
. You can use certain Compute Engine machine types directly in this field. The following types are supported: -n1-standard-4
-n1-standard-8
-n1-standard-16
-n1-standard-32
-n1-standard-64
-n1-standard-96
-n1-highmem-2
-n1-highmem-4
-n1-highmem-8
-n1-highmem-16
-n1-highmem-32
-n1-highmem-64
-n1-highmem-96
-n1-highcpu-16
-n1-highcpu-32
-n1-highcpu-64
-n1-highcpu-96
Alternatively, you can use the following legacy machine types: -standard
-large_model
-complex_model_s
-complex_model_m
-complex_model_l
-standard_gpu
-complex_model_m_gpu
-complex_model_l_gpu
-standard_p100
-complex_model_m_p100
-standard_v100
-large_model_v100
-complex_model_m_v100
-complex_model_l_v100
Finally, if you want to use a TPU for training, specifycloud_tpu
in this field. Learn more about the special configuration options for training with TPU. - output
Notebook Folder This property is required. String - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format:
gs://{bucket_name}/{folder}
Ex:gs://notebook_user/scheduled_notebooks
- parameters
This property is required. String - Parameters used within the 'input_notebook_file' notebook.
- params
Yaml File This property is required. String - Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex:
gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scale
Tier This property is required. String - Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
- service
Account This property is required. String - The email address of a service account to use when running the execution. You must have the
iam.serviceAccounts.actAs
permission for the specified service account. - tensorboard
This property is required. String - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format:
projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertex
Ai Parameters This property is required. VertexAIParameters Response - Parameters used in Vertex AI JobType executions.
- acceleratorConfig This property is required. SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri This property is required. string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters This property is required. DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- inputNotebookFile This property is required. string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType This property is required. string - The type of Job to be used on this execution.
- kernelSpec This property is required. string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels This property is required. {[key: string]: string} - Labels for execution. If the execution is scheduled, an included field will be 'nbs-scheduled'; otherwise it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType This property is required. string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; the following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder This property is required. string - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters This property is required. string - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile This property is required. string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier This property is required. string - Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. As of now, only CUSTOM is supported.
- serviceAccount This property is required. string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard This property is required. string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters This property is required. VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- accelerator_config This property is required. SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- container_image_uri This property is required. str - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataproc_parameters This property is required. DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- input_notebook_file This property is required. str - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- job_type This property is required. str - The type of Job to be used on this execution.
- kernel_spec This property is required. str - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels This property is required. Mapping[str, str] - Labels for execution. If the execution is scheduled, an included field will be 'nbs-scheduled'; otherwise it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- master_type This property is required. str - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; the following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- output_notebook_folder This property is required. str - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters This property is required. str - Parameters used within the 'input_notebook_file' notebook.
- params_yaml_file This property is required. str - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scale_tier This property is required. str - Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. As of now, only CUSTOM is supported.
- service_account This property is required. str - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard This property is required. str - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertex_ai_parameters This property is required. VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- acceleratorConfig This property is required. Property Map - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri This property is required. String - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters This property is required. Property Map - Parameters used in Dataproc JobType executions.
- inputNotebookFile This property is required. String - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType This property is required. String - The type of Job to be used on this execution.
- kernelSpec This property is required. String - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels This property is required. Map<String> - Labels for execution. If the execution is scheduled, an included field will be 'nbs-scheduled'; otherwise it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType This property is required. String - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; the following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder This property is required. String - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters This property is required. String - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile This property is required. String - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier This property is required. String - Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. As of now, only CUSTOM is supported.
- serviceAccount This property is required. String - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard This property is required. String - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters This property is required. Property Map - Parameters used in Vertex AI JobType executions.
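The execution template above documents fixed Cloud Storage path shapes for inputNotebookFile (gs://{bucket_name}/{folder}/{notebook_file_name}) and outputNotebookFolder (gs://{bucket_name}/{folder}). A minimal sketch of client-side validation for those shapes; the helper names are hypothetical and not part of the Pulumi SDK:

```python
import re

# Hypothetical helpers (not part of the Pulumi SDK): sanity-check the
# Cloud Storage path shapes documented for the execution template fields.
_FILE_RE = re.compile(r"^gs://[^/]+(?:/[^/]+)+$")    # gs://{bucket_name}/{folder}/{notebook_file_name}
_FOLDER_RE = re.compile(r"^gs://[^/]+(?:/[^/]+)*$")  # gs://{bucket_name}/{folder}

def is_input_notebook_file(path: str) -> bool:
    """True when `path` looks like gs://{bucket_name}/{folder}/{notebook_file_name}."""
    return bool(_FILE_RE.match(path)) and path.endswith(".ipynb")

def is_output_notebook_folder(path: str) -> bool:
    """True when `path` looks like gs://{bucket_name}/{folder}."""
    return bool(_FOLDER_RE.match(path))

# The examples from the property descriptions above:
print(is_input_notebook_file("gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb"))  # True
print(is_output_notebook_folder("gs://notebook_user/scheduled_notebooks"))  # True
print(is_input_notebook_file("scheduled_notebooks/sentiment_notebook.ipynb"))  # False: no gs:// bucket
```

Checking these paths before creating or updating a schedule can surface format mistakes locally instead of at execution time.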
SchedulerAcceleratorConfigResponse
- core_count This property is required. str - Count of cores of this accelerator.
- type This property is required. str - Type of this accelerator.
VertexAIParametersResponse
- Env This property is required. Dictionary<string, string> - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network This property is required. string - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- Env This property is required. map[string]string - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network This property is required. string - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env This property is required. Map<String,String> - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network This property is required. String - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env This property is required. {[key: string]: string} - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network This property is required. string - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env This property is required. Mapping[str, str] - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network This property is required. str - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env This property is required. Map<String> - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network This property is required. String - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
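The network field follows the resource name format projects/{project}/global/networks/{network}, with a project number rather than a project ID. A minimal sketch of parsing that format; the helper name is hypothetical and not part of the Pulumi SDK:

```python
import re

# Hypothetical helper (not part of the Pulumi SDK): split the documented
# network resource name format projects/{project}/global/networks/{network},
# where {project} is a project number, into its components.
_NETWORK_RE = re.compile(r"^projects/(\d+)/global/networks/([^/]+)$")

def parse_network(name: str) -> dict:
    """Return the project number and network name from a network resource name."""
    m = _NETWORK_RE.match(name)
    if m is None:
        raise ValueError(f"not of the form projects/{{project}}/global/networks/{{network}}: {name}")
    return {"project_number": int(m.group(1)), "network": m.group(2)}

# Using the example from the description above:
print(parse_network("projects/12345/global/networks/myVPC"))
# {'project_number': 12345, 'network': 'myVPC'}
```

Note the regex requires a numeric {project}; a name built with a project ID such as projects/my-project/... is rejected, matching the "project number" requirement stated above.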
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0