Agent--Inspur Cloud

In a production environment, configuring the backup agent for Inspur Cloud follows the same steps as the OpenStack backup agent. Because Keystone's API returns multiple endpoints for the same module, the configuration files must specify which endpoint each module should use.

Prerequisites:

Configure the virtualization platform according to the manual: [Agent--OpenStack]

Modify the Endpoint Configuration File

Get the Endpoint List

Access the Keystone authentication interface:

http://<keystone_ip>:<keystone_port>/<api_version>/auth/tokens

Example of the authentication interface:

http://192.168.110.5:5000/v3/auth/tokens

This returns the list of endpoints for the cloud platform.

Modify the Configuration File

For versions 3.1.x and later
The configuration file is located at: ${unispace_base_dir}/bin/virt/config/endpoint_config.json
For versions prior to 3.1.x
The configuration file is located at: ${unispace_base_dir}/bin/openstack/config/endpoint_config.json

The following configuration applies to the actual environment. In general, Inspur Cloud can use the configuration below as-is, without additional settings. In special cases, adjust it according to the actual query results.

{
        "nova":{
                "name": "nova",
                "type": "compute",
                "version": "",
                "interface_name": "public",
                "mapping": {}
        },
        "cinder":{
                "name": "cinderv3",
                "type": "volumev3",
                "version": "",
                "interface_name": "public",
                "mapping": {}
        },
        "glance":{
                "name": "glance",
                "type": "image",
                "version": "",
                "interface_name": "public",
                "mapping": {}
        },
        "neutron":{
                "name": "neutron",
                "type": "network",
                "version": "",
                "interface_name": "public",
                "mapping": {}
        }
}

Parameter Description

name: Module name. The system determines the corresponding module's endpoint based on the name.
type: Module type. The system determines the corresponding module's endpoint based on the type.
version: Endpoint version. The system uses this version number among the endpoint versions it supports. By default, it takes the latest supported endpoint version for the module.
interface_name: Endpoint interface type. By default, internal is prioritized. In the Inspur Cloud environment, it should be configured as public.
mapping: Endpoint host mapping. If the configured host is not directly accessible, this setting maps it to a host that the proxy node can access. In the Inspur Cloud environment, this configuration is not required.
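
For illustration only, a hypothetical mapping entry (not needed on Inspur Cloud) might redirect an endpoint host that the proxy node cannot reach to one it can. The hostname and IP below are placeholders, and the assumed semantics are "configured host" to "reachable host":

```json
{
        "cinder":{
                "name": "cinderv3",
                "type": "volumev3",
                "version": "",
                "interface_name": "public",
                "mapping": {
                        "unreachable-host.example": "192.168.110.20"
                }
        }
}
```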

Modify the Hosts File

The host's own hostname must be mapped to the loopback address 127.0.0.1.
After confirming the endpoint used by each module in the previous step, if the access node cannot resolve an endpoint's address, add the endpoint address to the /etc/hosts file on the access node.
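
For example, if a module's endpoint URL uses a hostname that the access node cannot resolve, /etc/hosts entries along the following lines (the hostname and IP are placeholders, not values from a real environment) would cover both requirements:

```
# Map the access node's own hostname to the loopback address
127.0.0.1       backup-agent-node

# Resolve an otherwise-unresolvable endpoint host (placeholder values)
192.168.110.10  controller-public
```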

Example

Taking the volume endpoint as an example:

Get the Endpoint List

Obtain the endpoint list from:http://<keystone_ip>:<keystone_port>/<api_version>/auth/tokens

curl --location 'http://192.168.110.5:5000/v3/auth/tokens' \
--header 'Content-Type: application/json' \
--data '{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "name": "username",
                    "domain": {
                        "id": "default"
                    },
                    "password": "password"
                }
            }
        }
    }
}'

In the returned result, find the volume entries. In this example, two endpoint versions are returned, each with three interface types.

{
    "token": {
        "is_domain": false,
      
        "catalog": [
             ...
            {
                "endpoints": [
                    {
                        "url": "http://public-ip:8776/v3/2a70276a202b4a419ad0c46c6de619af",
                        "interface": "public"                    
                    },
                    {
                        "url": "http://admin-ip:8776/v3/2a70276a202b4a419ad0c46c6de619af",
                        "interface": "admin"
                    },
                    {
                        "url": "http://internal-ip:8776/v3/2a70276a202b4a419ad0c46c6de619af",
                        "interface": "internal"
                    }
                ],
                "type": "volumev3",
                "name": "cinderv3"
            },
            {
                "endpoints": [
                    {
                        "url": "http://admin-ip:8776/v2/2a70276a202b4a419ad0c46c6de619af",
                        "interface": "admin"
                    },
                    {
                        "url": "http://public-ip:8776/v2/2a70276a202b4a419ad0c46c6de619af",
                        "interface": "public"
                    },
                    {
                        "url": "http://internal-ip:8776/v2/2a70276a202b4a419ad0c46c6de619af",
                        "interface": "internal"
                    }
                ],
                "type": "volumev2",
                "name": "cinderv2"
            }
        ]
    }
}

Read the Configuration File

The configuration file specifies the volume module's endpoint as:
name: cinderv3
type: volumev3
interface_name: public

"cinder":{
            "name": "cinderv3",
            "type": "volumev3",
            "version": "",
            "interface_name": "public",
            "mapping": {}
},

Matching the above conditions in the endpoint list, the endpoint URL is:
http://public-ip:8776/v3/2a70276a202b4a419ad0c46c6de619af
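
The matching above can be sketched in a few lines of Python. This is an illustrative sketch of the selection logic, not the agent's actual implementation; it matches the service by name and type, then picks the URL whose interface matches interface_name (version selection is omitted for brevity):

```python
def resolve_endpoint(catalog, name, service_type, interface_name):
    """Return the URL of the catalog endpoint matching the configured
    name, type, and interface, or None if nothing matches."""
    for service in catalog:
        if service.get("name") == name and service.get("type") == service_type:
            for ep in service.get("endpoints", []):
                if ep.get("interface") == interface_name:
                    return ep["url"]
    return None

# Abbreviated catalog mirroring the example response above.
catalog = [
    {
        "name": "cinderv3",
        "type": "volumev3",
        "endpoints": [
            {"interface": "public",
             "url": "http://public-ip:8776/v3/2a70276a202b4a419ad0c46c6de619af"},
            {"interface": "admin",
             "url": "http://admin-ip:8776/v3/2a70276a202b4a419ad0c46c6de619af"},
            {"interface": "internal",
             "url": "http://internal-ip:8776/v3/2a70276a202b4a419ad0c46c6de619af"},
        ],
    },
]

# With name=cinderv3, type=volumev3, interface_name=public, the public v3
# URL is selected, matching the result described above.
url = resolve_endpoint(catalog, "cinderv3", "volumev3", "public")
```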

Modify the Hosts File

If the access node cannot directly resolve public-ip, add an entry resolving public-ip to the /etc/hosts file on the access node.

Last modified: 2026-03-30