[k8s] Add validation for pod_config #4206 #4466
base: master
Conversation
Check pod_config when run 'sky check k8s' by using k8s api
Compare e994181 to 12f1208
Thanks @chesterli29! Left some questions. We may need to use an alternate approach since pod validation from k8s API server may be too strict.
sky/provision/kubernetes/utils.py
Outdated
kubernetes.core_api(context).create_namespaced_pod(
    namespace,
    body=pod_config,
    dry_run='All',
    field_validation='Strict',
    _request_timeout=kubernetes.API_TIMEOUT)
Does this approach work even if the pod_config is partially specified? E.g.,
kubernetes:
  pod_config:
    spec:
      containers:
        - env:
            - name: MY_ENV_VAR
              value: "my_value"
My hunch is k8s will reject this pod spec since it's not a complete pod spec, but it's a valid pod_config in our case.
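For concreteness, here is a minimal standalone sketch (not this PR's code; it uses the official kubernetes Python client directly and assumes a reachable cluster and a 'default' namespace) of what a server-side dry-run would do with that partial pod_config. The API server validates it as a complete Pod, so the missing container name and image should cause a rejection.

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()

partial_pod = {
    'spec': {
        'containers': [{
            'env': [{'name': 'MY_ENV_VAR', 'value': 'my_value'}],
        }],
    },
}

try:
    # dry_run='All' asks the API server to validate without persisting anything.
    client.CoreV1Api().create_namespaced_pod(
        namespace='default', body=partial_pod, dry_run='All')
    print('Accepted by the API server')
except ApiException as e:
    # Expected here: the partial spec is not a complete Pod (no name/image).
    print(f'Rejected by the API server: {e.reason}')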
Yes, k8s will reject this pod spec.
If this kind of pod_config is valid in this project, is there any definition of this config? For example, are some fields required and others optional? Or are all fields optional here, but they must follow the k8s pod requirements whenever they are set?
Here is my proposed solution: we can check the pod config by calling the k8s API after combine_pod_config_fields and combine_metadata_fields during launch (i.e., at an early stage of launching). It is really hard and complex to follow and maintain the k8s pod JSON/YAML schema in this project.
are all fields optional here, but they must follow the k8s pod requirements whenever they are set?
Yes, this is the definition of a valid pod_spec.
we can check the pod config by calling the k8s API after combine_pod_config_fields and combine_metadata_fields during launch (i.e., at an early stage of launching)
Yes, that sounds reasonable as long as we can surface to the user where the error comes from in their pod config.
Have we considered having a simple local schema check, with the json schema fetched and flattened from something like https://github.com/instrumenta/kubernetes-json-schema/tree/master?
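A rough sketch of what such a local check could look like, assuming a Pod JSON schema file has already been fetched from a repo like that one and that the third-party jsonschema package is available (neither is currently a SkyPilot dependency):

import json

import jsonschema  # third-party; would be a new dependency


def validate_pod_config_locally(pod_config: dict, schema_path: str) -> list:
    """Return a list of schema violation messages (empty if the config passes)."""
    with open(schema_path, 'r', encoding='utf-8') as f:
        pod_schema = json.load(f)
    validator = jsonschema.Draft7Validator(pod_schema)
    return [err.message for err in validator.iter_errors(pod_config)]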
Have we considered having a simple local schema check, with the json schema fetched and flattened from something like https://github.com/instrumenta/kubernetes-json-schema/tree/master?
Yeah, I took a look at this before. The main problem with this setup is that it needs to grab JSON schema files from another repo, e.g. https://github.com/yannh/kubernetes-json-schema, depending on which version of k8s the user is running. I'm not sure it's a good idea for sky to download dependencies to the local machine while it's running. Plus, if we want to check pod_config locally using a JSON schema, we might need to let users choose their k8s version so we can fetch the right schema file.
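To make the version coupling concrete, a hypothetical helper would have to build a per-version schema path; the directory layout below mirrors how the yannh/kubernetes-json-schema mirror appears to be organized, but it should be verified before relying on it:

def pod_schema_url(k8s_version: str) -> str:
    # Hypothetical: picks the strict standalone Pod schema for the cluster version.
    base = 'https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master'
    return f'{base}/v{k8s_version}-standalone-strict/pod-v1.json'


print(pod_schema_url('1.28.0'))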
Let's try the approach you proposed above (check the pod config by calling the k8s API after combine_pod_config_fields and combine_metadata_fields), if it can surface the exact errors to the users.
If that does not work, we may need to do schema validation locally. The Pod API has been relatively stable, so it might not be too bad to keep a fixed-version schema for validation.
LGTM.
BTW, I found an error case when I tested the approach with the JSON schema from kubernetes-json-schema. Here is the relevant part of my test YAML:
containers:
  - name: local_test
    image: test
Note the name local_test with an underscore: it is invalid when creating a pod, but it passes the JSON schema check. And if we use this config to create a sky cluster, it will fail later because of the invalid name.
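For illustration, local_test slips through because the published JSON schema does not encode the DNS-1123 label rule that the API server enforces for container names; a minimal local check for just that rule might look like this (illustrative only, not part of the PR):

import re

# RFC 1123 label: lowercase alphanumerics and '-', must start/end alphanumeric.
DNS1123_LABEL = re.compile(r'^[a-z0-9]([-a-z0-9]*[a-z0-9])?$')

print(bool(DNS1123_LABEL.match('local_test')))  # False: '_' is not allowed
print(bool(DNS1123_LABEL.match('local-test')))  # True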
check merged pod_config during launch using k8s api
if there is no kube config in the env, ignore ValueError when launching with dryrun. For now, we don't support checking the schema offline.
The approach has been adjusted: it now checks the pod_config using the k8s API at each launch, after combine_pod_config_fields and combine_metadata_fields.
sky/backends/backend_utils.py
Outdated
    tmp_yaml_path, dryrun)
if not valid:
    raise exceptions.InvalidCloudConfigs(
        f'There are invalid config in pod_config, deatil: {message}')
-        f'There are invalid config in pod_config, deatil: {message}')
+        f'Invalid pod_config. Details: {message}')
sky/provision/kubernetes/utils.py
Outdated
        body=pod_config,
        dry_run='All',
        _request_timeout=kubernetes.API_TIMEOUT)
except kubernetes.api_exception() as e:
Can kubernetes.api_exception() be raised for reasons unrelated to invalid config (e.g., insufficient permissions)? In that case, the error message is misleading. For example, I ran into this:
W 12-18 21:53:35 cloud_vm_ray_backend.py:2065] sky.exceptions.ResourcesUnavailableError: Failed to provision on cloud Kubernetes due to invalid cloud config: sky.exceptions.InvalidCloudConfigs: There are invalid config in pod_config, deatil: pods "Unknown" is forbidden: error looking up service account default/skypilot-service-account: serviceaccount "skypilot-service-account" not found
Can we filter the exception further and return valid = False only if the failure is due to invalid pod schema?
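One possible filter, as an untested assumption: the API server usually reports schema problems as HTTP 400 (Bad Request) or 422 (Unprocessable Entity), while the permission failure in the log above is a 403 (Forbidden), so the except clause could branch on the status code with a small helper like this (hypothetical):

def is_pod_schema_error(e) -> bool:
    """Heuristic: True if an ApiException likely means an invalid pod spec
    rather than a permission or environment problem."""
    return getattr(e, 'status', None) in (400, 422)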
If it's hard to filter, here is an alternative implementation (not sure if it works, needs testing):
from typing import Any, Dict, List, Optional

from kubernetes import client
from kubernetes.client.api_client import ApiClient


def validate_pod_config(pod_config: Dict[str, Any]) -> List[str]:
    """Validates a pod_config dictionary against Kubernetes schema.

    Args:
        pod_config: Dictionary containing pod configuration

    Returns:
        List of validation error messages. Empty list if validation passes.
    """
    errors = []
    # Create API client for schema validation
    api_client = ApiClient()
    try:
        # The pod_config can contain metadata and spec sections
        allowed_top_level = {'metadata', 'spec'}
        unknown_fields = set(pod_config.keys()) - allowed_top_level
        if unknown_fields:
            errors.append(
                f'Unknown top-level fields in pod_config: {unknown_fields}')

        # Validate metadata if present
        if 'metadata' in pod_config:
            try:
                api_client.sanitize_for_serialization(
                    client.V1ObjectMeta(**pod_config['metadata']))
            except (ValueError, TypeError) as e:
                errors.append(f'Invalid metadata: {str(e)}')

        # Validate spec if present
        if 'spec' in pod_config:
            try:
                api_client.sanitize_for_serialization(
                    client.V1PodSpec(**pod_config['spec']))
            except (ValueError, TypeError) as e:
                errors.append(f'Invalid spec: {str(e)}')
    except Exception as e:
        errors.append(f'Validation error: {str(e)}')

    return errors
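For example (illustrative), running it on the partial pod_config from earlier in the thread exercises only the client-side models, with no cluster needed:

errors = validate_pod_config({
    'spec': {
        'containers': [{'env': [{'name': 'MY_ENV_VAR', 'value': 'my_value'}]}],
    },
})
print(errors)  # [] if the client-side models accept these fields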
Thanks! If sanitize_for_serialization works, I think this approach is much better than creating with dry_run.
sky/backends/backend_utils.py
Outdated
valid, message = kubernetes_utils.check_pod_config(
    tmp_yaml_path, dryrun)
Running this on a fresh kubernetes cluster which does not already contain skypilot-service-account fails:
W 12-18 22:00:56 cloud_vm_ray_backend.py:2065] sky.exceptions.ResourcesUnavailableError: Failed to provision on cloud Kubernetes due to invalid cloud config: sky.exceptions.InvalidCloudConfigs: There are invalid config in pod_config, deatil: pods "Unknown" is forbidden: error looking up service account default/skypilot-service-account: serviceaccount "skypilot-service-account" not found
Note that this service account is created in our downstream provisioning logic in config.py before the pod is provisioned. We may want to move this check there.
sky/provision/kubernetes/utils.py
Outdated
if dryrun:
    logger.debug('ignore ValueError as there is no kube config '
                 'in the enviroment with dry_run. '
                 'For now we don\'t support check pod_config offline.')
    return True, None
return False, common_utils.format_exception(e)
This dry run case might not be required if we move the call to check_pod_config to our downstream logic in provision/kubernetes/instance.py or provision/kubernetes/config.py.
sky/provision/kubernetes/utils.py
Outdated
@@ -892,6 +892,53 @@ def check_credentials(context: Optional[str],
    return True, None


def check_pod_config(cluster_yaml_path: str, dryrun: bool) \
For reusability as a general method, we may want to use pod_config dict or V1Pod object as the arg
-def check_pod_config(cluster_yaml_path: str, dryrun: bool) \
+def check_pod_config(pod_config: Dict)
sky/provision/kubernetes/utils.py
Outdated
kubernetes.core_api(context).create_namespaced_pod(
    namespace,
    body=pod_config,
    dry_run='All',
    _request_timeout=kubernetes.API_TIMEOUT)
BTW, if we are going for this approach, is there any advantage to doing it here vs directly catching and raising errors when we do the actual create_namespaced_pod call here:
skypilot/sky/provision/kubernetes/instance.py
Lines 585 to 586 in 745cf59
pod = kubernetes.core_api(context).create_namespaced_pod(
    namespace, pod_spec)
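A hedged sketch of that alternative (the error classification by status code is an assumption; exceptions.InvalidCloudConfigs and the kubernetes adaptor helpers are the ones already used elsewhere in this PR):

# Assumes the surrounding context of instance.py, e.g.
# `from sky import exceptions` and `from sky.adaptors import kubernetes`.
try:
    pod = kubernetes.core_api(context).create_namespaced_pod(
        namespace, pod_spec)
except kubernetes.api_exception() as e:
    if e.status in (400, 422):  # likely an invalid pod spec from pod_config
        raise exceptions.InvalidCloudConfigs(
            f'Invalid pod_config. Details: {e.reason}') from e
    raise  # other failures (permissions, quota, ...) are not config errors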
From the results, they should be the same. However, just like we do validation for other configs, we want to expose potential configuration issues at an earlier stage.
This is great, thanks @chesterli29! Some minor comments, otherwise good to go.
sky/backends/backend_utils.py
Outdated
valid, message = kubernetes_utils.check_pod_config(pod_config)
if not valid:
    raise exceptions.InvalidCloudConfigs(
        f'Invalid pod_config. Deatil: {message}')
-        f'Invalid pod_config. Deatil: {message}')
+        f'Invalid pod_config. Details: {message}')
kubernetes:
  pod_config:
    metadata:
      labels:
        test-key: test-value
      annotations:
        abc: def
    spec:
      containers:
        - name:
          imagePullSecrets:
            - name: my-secret-2
Should we retain just the relevant kubernetes field and remove the docker, gcp, nvidia_gpus and other fields?
Also can we put a quick comment on what's the invalid field here?
Well, the deserialize API will ignore other invalid fields here, as we can see from the implementation at https://github.com/kubernetes-client/python/blob/e10470291526c82f12a0a3405910ccc3f3cdeb26/kubernetes/client/api_client.py#L620:

for attr, attr_type in six.iteritems(klass.openapi_types):
    if klass.attribute_map[attr] in data:
        value = data[klass.attribute_map[attr]]
        kwargs[attr] = self.__deserialize(value, attr_type)
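A quick way to see that behavior (illustrative; the fake response class exists only to satisfy ApiClient.deserialize, which expects an object carrying a JSON string in its .data attribute):

from kubernetes.client.api_client import ApiClient


class _FakeResponse:
    """Mimics the response object that ApiClient.deserialize() expects."""

    def __init__(self, data: str):
        self.data = data


api = ApiClient()
meta = api.deserialize(
    _FakeResponse('{"labels": {"a": "b"}, "bogus_field": 1}'), 'V1ObjectMeta')
print(meta.labels)  # {'a': 'b'}; 'bogus_field' is silently dropped, not rejected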
Ah I meant for readability in tests, let's have only this:
experimental:
  config_overrides:
    kubernetes:
      pod_config:
        metadata:
          labels:
            test-key: test-value
          annotations:
            abc: def
        spec:
          containers:
            - name:
              imagePullSecrets:
                - name: my-secret-2
except sky.exceptions.ResourcesUnavailableError:
    exception_occurred = True
Should we also verify the error message?
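If the surfaced message were stable, one way to check it would be pytest.raises with a match pattern (hypothetical sketch; as the reply below notes, the message returned here is not directly tied to the underlying error):

import pytest
import sky

with pytest.raises(sky.exceptions.ResourcesUnavailableError,
                   match='Invalid pod_config'):
    ...  # the launch call under test goes here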
I tried to validate this error message, but it's meaningless for this test because the error message returned by _provision in the end is not directly related to the actual error.
Check pod_config when run 'sky check k8s' by using k8s #4206
This commit extends the functionality of sky check k8s by adding a check for pod_config in this step. The method used to check pod_config is by calling the K8s API. This approach has some advantages and disadvantages. Of course, any other suggestions are welcome for discussion.
The test config.yaml
And the Check Result:
Tested (run the relevant ones):
bash format.sh
pytest tests/test_smoke.py
pytest tests/test_smoke.py::test_fill_in_the_name
conda deactivate; bash -i tests/backward_compatibility_tests.sh