7 Known Issues

This chapter contains information about known issues and limitations in this release.

Builder Utility Format 3 Unsupported

The Builder Utility can't build using format 3. Use format 1 or 2 instead.
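As a quick check, assuming the Builder Utility reads an ansible-builder style definition file in which the format is selected by the version key (typically a file named execution-environment.yml; both the key and the file name are assumptions that may differ in your setup), you can confirm which format a definition file requests before building:
    grep '^version' execution-environment.yml
    version: 3
If the file requests version: 3, change the value to 1 or 2 and run the build again.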

Namespace Deletion Fails

A namespace can't be deleted if a collection is first uploaded to the namespace and then denied approval, which places the collection in the rejected repository. If you try to delete the namespace, the following error message appears:
Namespace "<namespace_name>" could not be deleted.
Error 400 - Bad Request: The server was unable to complete your request

In the previous example, <namespace_name> can be the name of any namespace. This error persists even if you delete the collection in the rejected repository.

Workaround:
  1. Log in to the Private Automation Hub server.
  2. Log in as the pulp user:
    su -l pulp -s /bin/bash
  3. Run the following commands, replacing <namespace_name> with the name of the namespace that you want to delete:
    pulpcore-manager shell
    >>> from galaxy_ng.app.models.namespace import Namespace
    >>> Namespace.objects.filter(name="<namespace_name>").delete()
    (2, {'galaxy.CollectionImport': 1, 'galaxy.Namespace': 1})
    For example, the following commands delete the oracle namespace:
    pulpcore-manager shell
    >>> from galaxy_ng.app.models.namespace import Namespace
    >>> Namespace.objects.filter(name="oracle").delete()
    (2, {'galaxy.CollectionImport': 1, 'galaxy.Namespace': 1})
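The tuple that the delete() call prints is standard Django output: the total number of deleted objects followed by a per-model count. To confirm that the namespace record is gone, you can optionally run an extra check in the same shell; this verification step is a suggestion and isn't part of the documented workaround:
    >>> Namespace.objects.filter(name="<namespace_name>").exists()
    False
A result of False confirms that the namespace record was removed.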

Job Status Change After Node Restart

When an execution plane node restarts while a job is running on it, the control plane loses contact with the node and reports the following job status:
Job reaped due to instance shutdown
In some cases, when the execution plane node is recovered, the failed job status description may change to the following:
JSON Failed to JSON parse a line from worker stream. Error: Expecting value: line 1 column 1 (char 0)
Line with invalid JSON data: b''

This occurs because the control node tries to retrieve a status message for the failed job, and this error is what the worker stream returns. You can ignore the new status description; consider the job failed and restart it if needed.
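The wording of the second status description comes from Python's JSON decoder. As a minimal sketch using only the standard library, parsing the empty byte string shown in the message (b'') reproduces the same error text:
    python3
    >>> import json
    >>> json.loads(b'')
    Traceback (most recent call last):
      ...
    json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
In other words, the worker stream returns an empty line instead of a status record, and the decoder's error message becomes the job's new status description.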

Container Error After Execution Node Restarts

If an execution plane node restarts while a job is running, the next job that runs on the node after it recovers might display an error similar to the following in the playbook's standard output:
ERRO[0000] Refreshing container a16a37e423495ba0f5f10644617bf9b0ac874a8aae5ff9cdd43f3269fc3f1ac6: retrieving temporary directory for container a16a37e423495ba0f5f10644617bf9b0ac874a8aae5ff9cdd43f3269fc3f1ac6: no such container 

This error occurs because, after the node restarts, Podman can't find the temporary directory data for containers that were running before the restart. You can ignore the error message because it appears only in the first job after the node restarts; subsequent jobs don't show this error.
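If you want to confirm that the container named in the message is really gone, you can optionally check for it with Podman on the execution node; this is an extra diagnostic step, and it assumes that you run it as the same user account that runs automation jobs on that node:
    podman container exists a16a37e423495ba0f5f10644617bf9b0ac874a8aae5ff9cdd43f3269fc3f1ac6
    echo $?
    1
An exit status of 1 means that Podman has no record of the container, which is why the first job after the restart logs the refresh error.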

Topology Viewer Download Bundle Fails

The download bundle function of the topology viewer returns the following error message:
"A server error has occurred."
This issue is being investigated.