linstor-gateway's Introduction

LINSTOR Gateway

LINSTOR Gateway manages highly available iSCSI targets, NFS exports, and NVMe-oF targets by leveraging LINSTOR and drbd-reactor.

Getting Started

For a step-by-step tutorial on setting up a LINSTOR Gateway cluster, refer to this blog post: Create a Highly Available iSCSI Target Using LINSTOR Gateway.

Requirements

LINSTOR Gateway provides a built-in health check that automatically tests whether all requirements are correctly met on the current host.

Simply execute

linstor-gateway check-health

and follow any suggestions that may come up.
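
On a healthy node all checks pass; the output then looks like the all-green run shown in one of the reports further down this page:

[✓] LINSTOR
[✓] drbd-reactor
[✓] Resource Agents
[✓] iSCSI
[✓] NVMe-oF
[✓] NFS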

Documentation

If you want to learn more about LINSTOR Gateway, here are some pointers for further reading.

Command Line

Help for the command line interface is available by running:

linstor-gateway help

The same information can also be browsed in Markdown format here.

Configuration

LINSTOR Gateway can be configured through a configuration file. See its documentation here.
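
For orientation, here is a minimal sketch of such a file; the default path and the linstor.controllers key appear in reports further down this page, and the URL is only a placeholder for your own controller:

# /etc/linstor-gateway/linstor-gateway.toml
linstor.controllers = ["http://localhost:3370"]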

Internals

The LINSTOR Gateway command line client communicates with the server by using a REST API, which is documented here.

It also exposes a Go client for the REST API.
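
As a quick illustration, the running server can also be queried directly over HTTP; the endpoint and the default listen address below are taken from reports further down this page:

curl http://localhost:8080/api/v2/iscsi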

Building

If you want to test the latest unstable version of LINSTOR Gateway, you can build it from source:

git clone https://github.com/LINBIT/linstor-gateway
cd linstor-gateway
make
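
The resulting binary can then be run in server mode (assuming the server subcommand; the command line client connects to it on http://localhost:8080 by default, as shown in the command help later on this page):

./linstor-gateway server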

linstor-gateway's Issues

error path does not have a whitelisted parent.

Hi, I'm trying to get NFS exports working on a 3-node Proxmox cluster, but I'm getting the following error:

linstor-gateway --loglevel debug nfs create --resource=nfstest2 --service-ip=192.168.20.250/24 --allowed-ips=192.168.20.0/24 --resource-group=data1 --size=2G
DEBU[0000] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/files?content=true&limit=0&offset=0'
DEBU[0000] {"name":"data1","select_filter":{}}
DEBU[0000] curl -X 'POST' -d '{"name":"data1","select_filter":{}}
' -H 'Accept: application/json' -H 'Content-Type: application/json' 'http://localhost:3370/v1/resource-groups'
DEBU[0000] Status code not within 200 to 400, but 500 (Internal Server Error)
DEBU[0000] {"resource_definition":{"name":"nfstest2","props":{"DrbdOptions/Resource/auto-promote":"yes","DrbdOptions/Resource/on-no-quorum":"io-error","DrbdOptions/Resource/quorum":"majority","FileSystem/Type":"ext4"},"resource_group_name":"data1"}}
DEBU[0000] curl -X 'POST' -d '{"resource_definition":{"name":"nfstest2","props":{"DrbdOptions/Resource/auto-promote":"yes","DrbdOptions/Resource/on-no-quorum":"io-error","DrbdOptions/Resource/quorum":"majority","FileSystem/Type":"ext4"},"resource_group_name":"data1"}}
' -H 'Accept: application/json' -H 'Content-Type: application/json' 'http://localhost:3370/v1/resource-definitions'
DEBU[0000] {"volume_definition":{"volume_number":1,"size_kib":2097152}}
DEBU[0000] curl -X 'POST' -d '{"volume_definition":{"volume_number":1,"size_kib":2097152}}
' -H 'Accept: application/json' -H 'Content-Type: application/json' 'http://localhost:3370/v1/resource-definitions/nfstest2/volume-definitions'
DEBU[0000] {"select_filter":{}}
DEBU[0000] curl -X 'POST' -d '{"select_filter":{}}
' -H 'Accept: application/json' -H 'Content-Type: application/json' 'http://localhost:3370/v1/resource-definitions/nfstest2/autoplace'
DEBU[0002] {"override_props":{"DrbdOptions/Resource/auto-promote":"no"}}
DEBU[0002] curl -X 'PUT' -d '{"override_props":{"DrbdOptions/Resource/auto-promote":"no"}}
' -H 'Accept: application/json' -H 'Content-Type: application/json' 'http://localhost:3370/v1/resource-definitions/nfstest2'
DEBU[0008] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/resource-definitions/nfstest2'
DEBU[0008] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/view/resources?limit=0&offset=0&resources=nfstest2'
DEBU[0010] {"path":"/etc/drbd-reactor.d/linstor-gateway-nfs-nfstest2.toml","content":"W1twcm9tb3Rlcl1dCiAgaWQgPSAibmZzLW5mc3Rlc3QyIgogIFtwcm9tb3Rlci5yZXNvdXJjZXNdCiAgICBbcHJvbW90ZXIucmVzb3VyY2VzLm5mc3Rlc3QyXQogICAgICBzdGFydCA9IFsib2NmOmhlYXJ0YmVhdDpGaWxlc3lzdGVtIGZzXzEgZGV2aWNlPS9kZXYvZHJiZC9ieS1yZXMvbmZzdGVzdDIvMSBkaXJlY3Rvcnk9L3Nydi9nYXRld2F5LWV4cG9ydHMvbmZzdGVzdDIgZnN0eXBlPWV4dDQgcnVuX2ZzY2s9bm8iLCAib2NmOmhlYXJ0YmVhdDpleHBvcnRmcyBleHBvcnRfMV8wIGNsaWVudHNwZWM9MTkyLjE2OC4yMC4wLzI0IGRpcmVjdG9yeT0vc3J2L2dhdGV3YXktZXhwb3J0cy9uZnN0ZXN0MiBmc2lkPTFkNWM4ZWMxLTQzNmYtNTk5YS05MDM1LTgzMThmMTA0NTNiNyBvcHRpb25zPXJ3IHdhaXRfZm9yX2xlYXNldGltZV9vbl9zdG9wPTEiLCAib2NmOmhlYXJ0YmVhdDpJUGFkZHIyIHNlcnZpY2VfaXAgY2lkcl9uZXRtYXNrPTI0IGlwPTE5Mi4xNjguMjAuMjUwIl0KICAgICAgcnVubmVyID0gInN5c3RlbWQiCiAgICAgIG9uLWRyYmQtZGVtb3RlLWZhaWx1cmUgPSAicmVib290LWltbWVkaWF0ZSIKICAgICAgc3RvcC1zZXJ2aWNlcy1vbi1leGl0ID0gdHJ1ZQogICAgICB0YXJnZXQtYXMgPSAiQmluZHNUbyIK"}
DEBU[0010] curl -X 'PUT' -d '{"path":"/etc/drbd-reactor.d/linstor-gateway-nfs-nfstest2.toml","content":"W1twcm9tb3Rlcl1dCiAgaWQgPSAibmZzLW5mc3Rlc3QyIgogIFtwcm9tb3Rlci5yZXNvdXJjZXNdCiAgICBbcHJvbW90ZXIucmVzb3VyY2VzLm5mc3Rlc3QyXQogICAgICBzdGFydCA9IFsib2NmOmhlYXJ0YmVhdDpGaWxlc3lzdGVtIGZzXzEgZGV2aWNlPS9kZXYvZHJiZC9ieS1yZXMvbmZzdGVzdDIvMSBkaXJlY3Rvcnk9L3Nydi9nYXRld2F5LWV4cG9ydHMvbmZzdGVzdDIgZnN0eXBlPWV4dDQgcnVuX2ZzY2s9bm8iLCAib2NmOmhlYXJ0YmVhdDpleHBvcnRmcyBleHBvcnRfMV8wIGNsaWVudHNwZWM9MTkyLjE2OC4yMC4wLzI0IGRpcmVjdG9yeT0vc3J2L2dhdGV3YXktZXhwb3J0cy9uZnN0ZXN0MiBmc2lkPTFkNWM4ZWMxLTQzNmYtNTk5YS05MDM1LTgzMThmMTA0NTNiNyBvcHRpb25zPXJ3IHdhaXRfZm9yX2xlYXNldGltZV9vbl9zdG9wPTEiLCAib2NmOmhlYXJ0YmVhdDpJUGFkZHIyIHNlcnZpY2VfaXAgY2lkcl9uZXRtYXNrPTI0IGlwPTE5Mi4xNjguMjAuMjUwIl0KICAgICAgcnVubmVyID0gInN5c3RlbWQiCiAgICAgIG9uLWRyYmQtZGVtb3RlLWZhaWx1cmUgPSAicmVib290LWltbWVkaWF0ZSIKICAgICAgc3RvcC1zZXJ2aWNlcy1vbi1leGl0ID0gdHJ1ZQogICAgICB0YXJnZXQtYXMgPSAiQmluZHNUbyIK"}
' -H 'Accept: application/json' -H 'Content-Type: application/json' 'http://localhost:3370/v1/files/%2Fetc%2Fdrbd-reactor.d%2Flinstor-gateway-nfs-nfstest2.toml'
DEBU[0012] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/files?content=true&limit=0&offset=0'
DEBU[0012] curl -X 'POST' -H 'Accept: application/json' 'http://localhost:3370/v1/resource-definitions/nfstest2/files/%2Fetc%2Fdrbd-reactor.d%2Flinstor-gateway-nfs-nfstest2.toml'
DEBU[0013] Status code not within 200 to 400, but 500 (Internal Server Error)
Error: failed to start resources: failed to detach reactor configuration: error attaching file to resource: Message: '(Node: 'prox02') The path /etc/drbd-reactor.d/linstor-gateway-nfs-nfstest2.toml does not have a whitelisted parent. Allowed parent directories: []'; Reports: '[61ED9BD0-85651-000004]' next error: Message: 'Modification of resource definition 'nfstest2' failed due to an unknown exception.'; Details: 'Resource definition: nfstest2'; Reports: '[61ED99AB-00000-000002]'

Error message: (Node: 'prox02') The path /etc/drbd-reactor.d/linstor-gateway-nfs-nfstest2.toml does not have a whitelisted parent. Allowed parent directories: []

ErrorReport-61ED99AB-00000-000002.log

Not sure what I'm doing wrong.

Thank you!

compatible with pacemaker 2.0?

env:

  • ubuntu 20.04
  • pacemaker 2.0
  • corosync 3.0
  • linstor server 1.6.1
  • drbdd 0.1.0

linstor-iscsi start does not work, and no error is shown.

Output of linstor-iscsi: (screenshot)

Output of crm configure: (screenshot)

Can't add volume to existing iSCSI service

Hello! I created one iSCSI resource with a single LUN (ID 1). Now I want to create another LUN with ID 2 under the same IQN and service IP:

linstor-gateway iscsi create iqn.2019-08.com.home:lunidtest 192.168.5.7/24 10G --resource-group=hdd-rg-pc-2

Before adding the volume, I stopped the iSCSI target:

linstor-gateway iscsi stop iqn.2019-08.com.home:lunidtest

Then I tried to add a 5 GiB volume with LUN ID 2:

linstor-gateway iscsi add-volume iqn.2019-08.com.home:lunidtest 2 5G

After that, I tried to start the iSCSI target with both LUNs and got errors (screenshot: Pasted image 20230301162707).

How can I add volumes to an iSCSI target correctly? And why did adding LUN 2 break my existing LUN 1?

nvme create defaults to port_id=0, doesn't link in new subsystems.

Currently, any nvme create beyond the first will not be populated.

The reason is that the resource agent responsible for creating the port and linking the subsystems to that port never reaches that code, because of nvmet_port_monitor():
https://github.com/ClusterLabs/resource-agents/blob/main/heartbeat/nvmet-port#L137

The monitor only checks whether the port directory exists; if it does, nvmet_port_start() returns early and never iterates over the nqns:
https://github.com/ClusterLabs/resource-agents/blob/main/heartbeat/nvmet-port#L148

nvmet_port_start() {
	nvmet_port_monitor
	if [ $? =  $OCF_SUCCESS ]; then
		return $OCF_SUCCESS
	fi

	mkdir ${portdir}
	echo ${OCF_RESKEY_addr} > ${portdir}/addr_traddr
	echo ${OCF_RESKEY_type} > ${portdir}/addr_trtype
	echo ${OCF_RESKEY_svcid} > ${portdir}/addr_trsvcid
	echo ${OCF_RESKEY_addr_fam} > ${portdir}/addr_adrfam

	for subsystem in ${OCF_RESKEY_nqns}; do
		ln -s /sys/kernel/config/nvmet/subsystems/${subsystem} \
		   ${portdir}/subsystems/${subsystem}
	done

	nvmet_port_monitor
}
nvmet_port_monitor() {
	[ -d ${portdir} ] || return $OCF_NOT_RUNNING
	return $OCF_SUCCESS
}

I noticed that we don't populate port_id in "/etc/drbd-reactor.d/linstor-gateway-nvmeof-$name.toml"

We only populate:

"ocf:heartbeat:nvmet-port port addr=10.91.230.214 nqns=linbit:nvme:zoo type=tcp"

Desired behavior?
If the user provides a different service address, it should probably just take the next available port automatically. This should probably be fixed in linstor-gateway.

If the user provides the same service address, it should link the new subsystem in. This should probably be fixed in resource-agents.

Potentially, port_id could be exposed to the user, but that is probably not necessary.

Can't Create more than 1 NFS per satellite

It appears that in the create function here: https://github.com/LINBIT/linstor-gateway/blob/master/pkg/nfs/nfs.go#L60 we may be erroneously checking for duplicates (depending on the intention). I've added comments to the code block below.

	for _, c := range configs {
		// We check whether the incoming create matches any of the configs we already have. If they match, continue.
		if c.ID == rsc.ID() {
			continue
		}
		for _, r := range c.Resources {
			for _, s := range r.Start {
				if agent, ok := s.(*reactor.ResourceAgent); ok {
					// At this point we are definitely a new incoming configuration and are going to be an
					// NFS server, so why error? Do we only want one export per machine here? I've taken
					// out this line and had more than one export with normal behavior.
					if agent.Type == "ocf:heartbeat:nfsserver" {
						return nil, fmt.Errorf("an NFS config with a different ID already exists: %s", c.ID)
					}
				}
			}
		}
	}
[root@drbd-lsc-0 ~]# linstor-gateway nfs create foo 10.91.197.28/32 2G --resource-group=nfs_group
Created export 'foo' at 10.91.197.28:/srv/gateway-exports/foo
[root@drbd-lsc-0 ~]# linstor-gateway nfs create bar 10.91.197.28/32 5G --resource-group=nfs_group
Error: failed to create nfs resource: an NFS config with a different ID already exists: nfs-foo
Usage:
  linstor-gateway nfs create NAME SERVICE_IP SIZE [flags]

Examples:
linstor-gateway nfs create example 192.168.211.122/24 2G
linstor-gateway nfs create restricted 10.10.22.44/16 2G --allowed-ips 10.10.0.0/16


Flags:
      --allowed-ips ip-cidr     Set the IP address mask of clients that are allowed access (default 0.0.0.0/0)
  -p, --export-path string      Set the export path, relative to /srv/gateway-exports (default "/")
  -h, --help                    help for create
  -r, --resource-group string   LINSTOR resource group to use (default "DfltRscGrp")

Global Flags:
      --config string     Config file to load (default "/etc/linstor-gateway/linstor-gateway.toml")
  -c, --connect string    LINSTOR Gateway server to connect to (default "http://localhost:8080")
      --loglevel string   Set the log level (as defined by logrus) (default "info")

failed to create nfs resource: an NFS config with a different ID already exists: nfs-foo
[root@drbd-lsc-0 ~]# lin sp l
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node                                                    ┊ Driver   ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ drbd-nfs-1.inst.bruce-dev.us-east-2.bdf-cloud.iqvia.net ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ DfltDisklessStorPool ┊ drbd-nfs-2.inst.bruce-dev.us-east-2.bdf-cloud.iqvia.net ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ DfltDisklessStorPool ┊ drbd-nfs-3.inst.bruce-dev.us-east-2.bdf-cloud.iqvia.net ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ lvmpool              ┊ drbd-nfs-1.inst.bruce-dev.us-east-2.bdf-cloud.iqvia.net ┊ LVM      ┊ lvpool   ┊    47.93 GiB ┊     50.00 GiB ┊ False        ┊ Ok    ┊            ┊
┊ lvmpool              ┊ drbd-nfs-2.inst.bruce-dev.us-east-2.bdf-cloud.iqvia.net ┊ LVM      ┊ lvpool   ┊    47.93 GiB ┊     50.00 GiB ┊ False        ┊ Ok    ┊            ┊
┊ lvmpool              ┊ drbd-nfs-3.inst.bruce-dev.us-east-2.bdf-cloud.iqvia.net ┊ LVM      ┊ lvpool   ┊    47.93 GiB ┊     50.00 GiB ┊ False        ┊ Ok    ┊            ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
[root@drbd-lsc-0 ~]# lin rg l
╭────────────────────────────────────────────────────────────────╮
┊ ResourceGroup ┊ SelectFilter            ┊ VlmNrs ┊ Description ┊
╞════════════════════════════════════════════════════════════════╡
┊ DfltRscGrp    ┊ PlaceCount: 2           ┊        ┊             ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ nfs_group     ┊ PlaceCount: 3           ┊ 0      ┊             ┊
┊               ┊ StoragePool(s): lvmpool ┊        ┊             ┊
╰────────────────────────────────────────────────────────────────╯

nvme add-volume is 56 MiB instead of 10G

Create:

[root@drbd-linstor-0 ~]# lg nvme create -r nvme_group linbit:nvme:demo0 10.91.230.214/32 10G
Created target "linbit:nvme:demo0"

[root@drbd-linstor-0 ~]# lg nvme list
+-------------------+------------------+---------------+-----------+---------------+
|        NQN        |    Service IP    | Service state | Namespace | LINSTOR state |
+-------------------+------------------+---------------+-----------+---------------+
| linbit:nvme:demo0 | 10.91.230.214/32 | Started       |         1 | OK            |
+-------------------+------------------+---------------+-----------+---------------+

Add-Volume:

[root@drbd-linstor-0 ~]# lg nvme stop linbit:nvme:demo0
Stopped target "linbit:nvme:demo0"
[root@drbd-linstor-0 ~]# lg nvme add-volume linbit:nvme:demo0 2 10G
Added volume to "linbit:nvme:demo0"
[root@drbd-linstor-0 ~]# lg nvme list
+-------------------+------------------+---------------+-----------+---------------+
|        NQN        |    Service IP    | Service state | Namespace | LINSTOR state |
+-------------------+------------------+---------------+-----------+---------------+
| linbit:nvme:demo0 | 10.91.230.214/32 | Stopped       |         1 | OK            |
|                   |                  | Stopped       |         2 | OK            |
+-------------------+------------------+---------------+-----------+---------------+
[root@drbd-linstor-0 ~]# lg nvme start linbit:nvme:demo0
Started target "linbit:nvme:demo0"

Volume Size:

[root@drbd-linstor-0 ~]# lin vd l
╭───────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ VolumeNr ┊ VolumeMinor ┊ Size      ┊ Gross ┊ State ┊
╞═══════════════════════════════════════════════════════════════════╡
┊ demo0        ┊ 0        ┊ 1002        ┊ 60 MiB    ┊       ┊ ok    ┊
┊ demo0        ┊ 1        ┊ 1003        ┊ 10.00 GiB ┊       ┊ ok    ┊
┊ demo0        ┊ 2        ┊ 1004        ┊ 56 MiB    ┊       ┊ ok    ┊
┊ milliman     ┊ 0        ┊ 1000        ┊ 60 MiB    ┊       ┊ ok    ┊
┊ milliman     ┊ 1        ┊ 1001        ┊ 10.00 GiB ┊       ┊ ok    ┊
╰───────────────────────────────────────────────────────────────────╯

ISCSI EOF

Hi.

After removing / adding new satellites, the gateway often stops working and gives the following error:

#linstor-gateway iscsi list
Error: Get "http://localhost:8080/api/v2/iscsi": EOF

curl http://localhost:8080/api/v2/iscsi
curl: (52) Empty reply from server

although at the moment there are iSCSI resources that are working and in use.

linstor-gateway version 1.0.0

Unable to finalize NFS share creation

Hi,

I have a test cluster with 3 nodes (+1 dedicated node for the controller):

linstor n l
╭─────────────────────────────────────────────────────╮
┊ Node ┊ NodeType  ┊ Addresses               ┊ State  ┊
╞═════════════════════════════════════════════════════╡
┊ lin1 ┊ SATELLITE ┊ 10.16.0.41:3366 (PLAIN) ┊ Online ┊
┊ lin2 ┊ SATELLITE ┊ 10.16.0.42:3366 (PLAIN) ┊ Online ┊
┊ lin3 ┊ SATELLITE ┊ 10.16.0.43:3366 (PLAIN) ┊ Online ┊
╰─────────────────────────────────────────────────────╯
linstor sp l
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node ┊ Driver   ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ lin1 ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ DfltDisklessStorPool ┊ lin2 ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ DfltDisklessStorPool ┊ lin3 ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ pool_ssd             ┊ lin1 ┊ ZFS      ┊ DataPool ┊     1.32 GiB ┊      2.81 GiB ┊ True         ┊ Ok    ┊            ┊
┊ pool_ssd             ┊ lin2 ┊ ZFS      ┊ DataPool ┊     1.84 GiB ┊      2.81 GiB ┊ True         ┊ Ok    ┊            ┊
┊ pool_ssd             ┊ lin3 ┊ ZFS      ┊ DataPool ┊     1.15 GiB ┊      2.81 GiB ┊ True         ┊ Ok    ┊            ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
linstor rg l
╭─────────────────────────────────────────────────────────────────╮
┊ ResourceGroup ┊ SelectFilter             ┊ VlmNrs ┊ Description ┊
╞═════════════════════════════════════════════════════════════════╡
┊ DfltRscGrp    ┊ PlaceCount: 2            ┊        ┊             ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ my_ssd_group  ┊ PlaceCount: 2            ┊ 0      ┊             ┊
┊               ┊ StoragePool(s): pool_ssd ┊        ┊             ┊
╰─────────────────────────────────────────────────────────────────╯

And when I create an NFS share, it doesn't complete:

linstor-gateway --loglevel debug nfs create --resource=nfstest1 --service-ip=10.16.0.45/16 --allowed-ips=10.0.0.0/8 --resource-group=my_ssd_group --size=150M
DEBU[0000] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/files?content=true&limit=0&offset=0'
DEBU[0000] {"name":"my_ssd_group","select_filter":{}}
DEBU[0000] curl -X 'POST' -d '{"name":"my_ssd_group","select_filter":{}}
' -H 'Accept: application/json' -H 'Content-Type: application/json' 'http://localhost:3370/v1/resource-groups'
DEBU[0000] Status code not within 200 to 400, but 500 (Internal Server Error)
DEBU[0000] {"resource_definition":{"name":"nfstest1","props":{"FileSystem/Type":"ext4"},"resource_group_name":"my_ssd_group"}}
DEBU[0000] curl -X 'POST' -d '{"resource_definition":{"name":"nfstest1","props":{"FileSystem/Type":"ext4"},"resource_group_name":"my_ssd_group"}}
' -H 'Accept: application/json' -H 'Content-Type: application/json' 'http://localhost:3370/v1/resource-definitions'
DEBU[0000] {"volume_definition":{"volume_number":1,"size_kib":153600}}
DEBU[0000] curl -X 'POST' -d '{"volume_definition":{"volume_number":1,"size_kib":153600}}
' -H 'Accept: application/json' -H 'Content-Type: application/json' 'http://localhost:3370/v1/resource-definitions/nfstest1/volume-definitions'
DEBU[0000] {"select_filter":{}}
DEBU[0000] curl -X 'POST' -d '{"select_filter":{}}
' -H 'Accept: application/json' -H 'Content-Type: application/json' 'http://localhost:3370/v1/resource-definitions/nfstest1/autoplace'
DEBU[0003] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/resource-definitions/nfstest1'
DEBU[0003] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/view/resources?limit=0&offset=0&resources=nfstest1'
DEBU[0003] {"path":"/etc/drbd-reactor.d/linstor-gateway-nfs-nfstest1.toml","content":"W1twcm9tb3Rlcl1dCiAgaWQgPSAibmZzLW5mc3Rlc3QxIgogIFtwcm9tb3Rlci5yZXNvdXJjZXNdCiAgICBbcHJvbW90ZXIucmVzb3VyY2VzLm5mc3Rlc3QxXQogICAgICBzdGFydCA9IFsib2NmOmhlYXJ0YmVhdDpGaWxlc3lzdGVtIGZzXzEgZGV2aWNlPS9kZXYvZHJiZC9ieS1yZXMvbmZzdGVzdDEvMSBkaXJlY3Rvcnk9L3Nydi9nYXRld2F5LWV4cG9ydHMvbmZzdGVzdDEgZnN0eXBlPWV4dDQgcnVuX2ZzY2s9bm8iLCAib2NmOmhlYXJ0YmVhdDpleHBvcnRmcyBleHBvcnRfMV8wIGNsaWVudHNwZWM9MTAuMC4wLjAvOCBkaXJlY3Rvcnk9L3Nydi9nYXRld2F5LWV4cG9ydHMvbmZzdGVzdDEgZnNpZD1iZmYzYmQ5MS02Mzc3LTU2YTYtYjcwMi04ODY3MGFjOWYxYmMgb3B0aW9ucz1ydyB3YWl0X2Zvcl9sZWFzZXRpbWVfb25fc3RvcD0xIiwgIm9jZjpoZWFydGJlYXQ6SVBhZGRyMiBzZXJ2aWNlX2lwIGNpZHJfbmV0bWFzaz0xNiBpcD0xMC4xNi4wLjQ1Il0KICAgICAgcnVubmVyID0gInN5c3RlbWQiCiAgICAgIG9uLXN0b3AtZmFpbHVyZSA9ICJlY2hvIGIgPiAvcHJvYy9zeXNycS10cmlnZ2VyIgogICAgICBzdG9wLXNlcnZpY2VzLW9uLWV4aXQgPSB0cnVlCiAgICAgIHRhcmdldC1hcyA9ICJCaW5kc1RvIgo="}
DEBU[0003] curl -X 'PUT' -d '{"path":"/etc/drbd-reactor.d/linstor-gateway-nfs-nfstest1.toml","content":"W1twcm9tb3Rlcl1dCiAgaWQgPSAibmZzLW5mc3Rlc3QxIgogIFtwcm9tb3Rlci5yZXNvdXJjZXNdCiAgICBbcHJvbW90ZXIucmVzb3VyY2VzLm5mc3Rlc3QxXQogICAgICBzdGFydCA9IFsib2NmOmhlYXJ0YmVhdDpGaWxlc3lzdGVtIGZzXzEgZGV2aWNlPS9kZXYvZHJiZC9ieS1yZXMvbmZzdGVzdDEvMSBkaXJlY3Rvcnk9L3Nydi9nYXRld2F5LWV4cG9ydHMvbmZzdGVzdDEgZnN0eXBlPWV4dDQgcnVuX2ZzY2s9bm8iLCAib2NmOmhlYXJ0YmVhdDpleHBvcnRmcyBleHBvcnRfMV8wIGNsaWVudHNwZWM9MTAuMC4wLjAvOCBkaXJlY3Rvcnk9L3Nydi9nYXRld2F5LWV4cG9ydHMvbmZzdGVzdDEgZnNpZD1iZmYzYmQ5MS02Mzc3LTU2YTYtYjcwMi04ODY3MGFjOWYxYmMgb3B0aW9ucz1ydyB3YWl0X2Zvcl9sZWFzZXRpbWVfb25fc3RvcD0xIiwgIm9jZjpoZWFydGJlYXQ6SVBhZGRyMiBzZXJ2aWNlX2lwIGNpZHJfbmV0bWFzaz0xNiBpcD0xMC4xNi4wLjQ1Il0KICAgICAgcnVubmVyID0gInN5c3RlbWQiCiAgICAgIG9uLXN0b3AtZmFpbHVyZSA9ICJlY2hvIGIgPiAvcHJvYy9zeXNycS10cmlnZ2VyIgogICAgICBzdG9wLXNlcnZpY2VzLW9uLWV4aXQgPSB0cnVlCiAgICAgIHRhcmdldC1hcyA9ICJCaW5kc1RvIgo="}
' -H 'Accept: application/json' -H 'Content-Type: application/json' 'http://localhost:3370/v1/files/%2Fetc%2Fdrbd-reactor.d%2Flinstor-gateway-nfs-nfstest1.toml'
DEBU[0004] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/files?content=true&limit=0&offset=0'
DEBU[0004] curl -X 'POST' -H 'Accept: application/json' 'http://localhost:3370/v1/resource-definitions/nfstest1/files/%2Fetc%2Fdrbd-reactor.d%2Flinstor-gateway-nfs-nfstest1.toml'
DEBU[0004] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/view/resources?limit=0&offset=0&resources=nfstest1'
DEBU[0007] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/view/resources?limit=0&offset=0&resources=nfstest1'
DEBU[0010] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/view/resources?limit=0&offset=0&resources=nfstest1'
DEBU[0013] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/view/resources?limit=0&offset=0&resources=nfstest1'
DEBU[0016] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/view/resources?limit=0&offset=0&resources=nfstest1'
DEBU[0019] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/view/resources?limit=0&offset=0&resources=nfstest1'
DEBU[0022] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/view/resources?limit=0&offset=0&resources=nfstest1'
DEBU[0025] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/view/resources?limit=0&offset=0&resources=nfstest1'
DEBU[0028] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/view/resources?limit=0&offset=0&resources=nfstest1'
DEBU[0031] curl -X 'GET' -H 'Accept: application/json' 'http://localhost:3370/v1/view/resources?limit=0&offset=0&resources=nfstest1'
Error: failed to start resources: error waiting for resource to become used: context deadline exceeded
Usage:
  linstor-gateway nfs create [flags]

Examples:
linstor-gateway nfs create --resource=example --service-ip=192.168.211.122/24 --allowed-ips=192.168.0.0/16 --resource-group=ssd_thin_2way --size=2G

Flags:
      --allowed-ips ip-cidr     Set the IP address mask of clients that are allowed access (default ::1/64)
  -p, --export-path string      Set the export path (default "/")
  -h, --help                    help for create
  -r, --resource string         Set the resource name (required)
  -g, --resource-group string   Set the LINSTOR resource group name
      --service-ip ip-cidr      Set the service IP and netmask of the target (required) (default ::1/64)
      --size unit               Set a size (e.g, 1TiB) (default 1GiB)

Global Flags:
      --config string         Config file to load (default "/etc/linstor-gateway/linstor-gateway.toml")
      --controllers strings   List of LINSTOR controllers to try to connect to (default from $LS_CONTROLLERS, or localhost:3370)
      --loglevel string       Set the log level (as defined by logrus) (default "info")

failed to start resources: error waiting for resource to become used: context deadline exceeded

I finally got it working after rebooting my nodes or restarting the drbd-reactor service.

linstor-gateway nfs list
+----------+---------------+---------------+-------------------------------+---------------+
| Resource |  Service IP   | Service state |          NFS export           | LINSTOR state |
+----------+---------------+---------------+-------------------------------+---------------+
| nfstest1 | 10.16.0.45/16 | Started       | /srv/gateway-exports/nfstest1 | OK            |
| nfstest2 | 10.16.0.46/16 | Started       | /srv/gateway-exports/nfstest2 | OK            |
+----------+---------------+---------------+-------------------------------+---------------+

Can you help me debug?

Thank you

Why is an iptables rule dropping port 2049 automatically added when using linstor-gateway to export NFS services?

I found that my NFS export could not be mounted on the client using the virtual IP, although the shared directory could be discovered using showmount -e.
I can mount the NFS export normally using the IP of the physical network card.
I checked all the configurations until I happened to look at iptables and found that a rule dropping port 2049 had been added automatically.
This rule prevented me from mounting the export; after deleting it, mounting worked immediately. Why is this rule added?

root@lab-pve1:~# iptables -vnL
Chain INPUT (policy ACCEPT 172K packets, 45M bytes)
 pkts bytes target prot opt in  out source     destination
    0     0 DROP   6    --  *   *   0.0.0.0/0  192.168.128.30  multiport dports 2049

root@lab-pve3:~# linstor-gateway nfs list
+----------+-------------------+--------------------+--------------------------+---------------+
| Resource |    Service IP     |   Service state    |        NFS export        | LINSTOR state |
+----------+-------------------+--------------------+--------------------------+---------------+
| nfs      | 192.168.128.30/32 | Started (lab-pve1) | /srv/gateway-exports/nfs | OK            |
+----------+-------------------+--------------------+--------------------------+---------------+

Implement confirmation dialog when deleting

Simple but important feature: when running e.g. linstor-gateway nfs delete my-nfs-export, there should be some confirmation. My first impression when testing this was that only the export would be deleted and the data would remain, but in fact everything is deleted. I want some guard protecting me from destructive actions!
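
For illustration only, a minimal sketch in Go (the project's language) of the kind of guard being requested; nothing below exists in linstor-gateway, and all names are made up:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// confirm asks the user to explicitly type "yes" before a destructive action proceeds.
func confirm(action string) bool {
	fmt.Printf("%s. Type 'yes' to continue: ", action)
	line, err := bufio.NewReader(os.Stdin).ReadString('\n')
	if err != nil {
		return false
	}
	return strings.TrimSpace(line) == "yes"
}

func main() {
	// Hypothetical delete path: refuse to proceed unless the user confirms.
	if !confirm("This will delete the export AND all data on it") {
		fmt.Println("Aborted.")
		return
	}
	fmt.Println("Deleting...")
}

A flag to skip the prompt in scripted use would be the usual companion to such a guard.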

v1.3.0 - unable to get iscsi tgt implementation working

I'm testing linstor-gateway v1.3.0 and the new tgt iSCSI target implementation, but I'm unable to get it working. Another user seems to have reported a similar issue on the drbd-user mailing list:

https://lists.linbit.com/pipermail/drbd-user/2023-October/026493.html

Please also see the attached log file.

linstor-gateway -v
linstor-gateway version 1.3.0
dpkg -l|grep tgt
ii  tgt                                  1:1.0.85-1                          amd64        Linux SCSI target user-space daemon and tools

linstor-gateway iscsi create iqn.2019-08.com.linbit:iscsi-lun1 10.10.3.10/24 10G --resource-group iscsi-group --implementation tgt

linstor-gateway iscsi list
+-----------------------------------+---------------+----------------+-----+---------------+
|                IQN                |  Service IP   | Service state  | LUN | LINSTOR state |
+-----------------------------------+---------------+----------------+-----+---------------+
| iqn.2019-08.com.linbit:iscsi-lun1 | 10.10.3.10/24 | Started (pve1) |   1 | OK            |
+-----------------------------------+---------------+----------------+-----+---------------+

The resource moves between the nodes but never starts properly. The HA VIP also does not respond to ping requests.

root@pve1:~# drbdadm status
iscsi-lun1 role:Secondary
  volume:0 disk:Diskless
  volume:1 disk:Diskless
  pve2 role:Primary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
  pve3 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

root@pve1:~# drbdadm status
iscsi-lun1 role:Secondary
  volume:0 disk:Diskless
  volume:1 disk:Diskless
  pve2 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
  pve3 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

lgtgterr.log

Connection to a remote LINSTOR controller possible?

Hello,

I was testing linstor-gateway on 2 nodes, both of which act only as LINSTOR satellites (neither acts as a controller). The controller runs on 3 different nodes and is managed by drbd-reactor on those 3 nodes for HA. In order for the 2 linstor-gateway nodes to locate the controller, I use the following environment variable:

LS_CONTROLLERS=linctrl01,linctrl02,linctrl03

I'm able to run linstor n l and I've also added the 2 linstor-gateway nodes as satellites, so everything looks good until I try to create an iSCSI target. It seems that linstor-gateway expects the controller to be listening locally on port 3370, and hence it fails. Is it possible to instruct linstor-gateway to connect to the remote controller instead? Or is it mandatory to have a linstor-controller configured on the same 2 linstor-gateway hosts, managed by drbd-reactor for HA? (A configuration sketch follows the error output below.)

root@linstor-gw-01[05:11 PM]:~ linstor-gateway iscsi create iqn.2019-08.com.linbit:example 10.10.12.20/24 20G --username=foo --password=bar --resource-group=iscsi_group

Error: failed to create iscsi resource: failed to check for existing config: failed to fetch file list: Get "http://localhost:3370/v1/files?content=true&limit=0&offset=0": dial tcp 127.0.0.1:3370: connect: connection refused
Usage:
  linstor-gateway iscsi create IQN SERVICE_IPS [VOLUME_SIZE]... [flags]

Examples:
linstor-gateway iscsi create iqn.2019-08.com.linbit:example 192.168.122.181/24 2G

Flags:
      --allowed-initiators strings   Restrict which initiator IQNs are allowed to connect to the target
  -h, --help                         help for create
  -p, --password string              Set the password to use for CHAP authentication
  -g, --resource-group string        Set the LINSTOR resource group (default "DfltRscGrp")
  -u, --username string              Set the username to use for CHAP authentication

Global Flags:
      --config string     Config file to load (default "/etc/linstor-gateway/linstor-gateway.toml")
  -c, --connect string    LINSTOR Gateway server to connect to (default "http://localhost:8080")
      --loglevel string   Set the log level (as defined by logrus) (default "info")

failed to create iscsi resource: failed to check for existing config: failed to fetch file list: Get "http://localhost:3370/v1/files?content=true&limit=0&offset=0": dial tcp 127.0.0.1:3370: connect: connection refused
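
Not an authoritative answer, but for illustration: the linstor.controllers key shown in another report on this page suggests the gateway can be pointed at remote controllers through its config file, roughly like this (hostnames taken from the LS_CONTROLLERS variable above; port 3370 assumes plain HTTP):

# /etc/linstor-gateway/linstor-gateway.toml
linstor.controllers = ["http://linctrl01:3370","http://linctrl02:3370","http://linctrl03:3370"]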

iscsi add-volume creates incorrect size.

iscsi add-volume always(?) creates a volume of 57344 KiB (56 MiB).

$ linstor-gateway --version
linstor-gateway version 0.12.1
$ linstor-gateway iscsi create iqn.2019-08.com.test:test 192.168.0.92/24 1G --username=test --password=test12345678 --resource-group=iscsi_group
Created iSCSI target 'iqn.2019-08.com.test:test'
$ linstor-gateway iscsi stop iqn.2019-08.com.test:test
Stopped target "iqn.2019-08.com.test:test"
$ linstor-gateway iscsi add-volume iqn.2019-08.com.test:test 2 2G
Added volume to "iqn.2019-08.com.test:test"
$ linstor-gateway iscsi add-volume iqn.2019-08.com.test:test 2 2G
Error: failed to add volume to resource: existing volume has differing size 57344 != 2097152

Skipping a LUN (create, stop, add-volume 3) returns an error but does create the correct size. Creating LUN 2 and then LUN 3 produces two 57 MiB LUNs. The initial volume is sized mostly correctly; see below.

A minor related issue: even the "correct" partition sizes are not quite right:

$ linstor-gateway iscsi add-volume iqn.2019-08.com.test:test 3 2G
Error: failed to add volume to resource: existing volume has differing size 2093056 != 2097152

Username and password are not actually used

I haven't investigated this too much; all I know is that if you try to use the target with CHAP authentication disabled, it just works. IIRC, the LIO "credentials" setting is also not being set correctly.

Using linstor-gateway with an SSL key-secured controller

  • cluster info
$ linstor n l -p
+----------------------------------------------------------+
| Node      | NodeType | Addresses                | State  |
|==========================================================|
| vc-swarm1 | COMBINED | 192.168.90.21:3367 (SSL) | Online |
| vc-swarm2 | COMBINED | 192.168.90.22:3367 (SSL) | Online |
| vc-swarm3 | COMBINED | 192.168.90.23:3367 (SSL) | Online |
+----------------------------------------------------------+
  • test SSL api
$ curl -s --cert /etc/linstor/ssl/clients.uncrypted.pem \
              --key /etc/linstor/ssl/clients.uncrypted.pem \
              --cacert /etc/linstor/ssl/ca.crt --http1.1 \
              --insecure https://192.168.90.21:3371/v1/controller/version | jq
{
  "version": "0.1",
  "git_hash": "07890a5c51382267c7015a07a9c5b4a9ee9a0ae8",
  "build_time": "2023-03-17T23:16:03+00:00",
  "rest_api_version": "1.17.0"
}
  • linstor-gateway config
$ cat /etc/linstor-gateway/linstor-gateway.toml
linstor.controllers = ["https://192.168.90.21:3371","https://192.168.90.22:3371","https://192.168.90.23:3371"]
  • check health
$ linstor-gateway check-health
[!] LINSTOR
    ✗ No connection to a LINSTOR controller
      Get "https://192.168.90.21:3371/v1/controller/version": x509: cannot validate certificate for 192.168.90.21 because it doesn't contain any IP SANs
      Make sure that either
      • the --controllers command line option, or
      • the LS_CONTROLLERS environment variable, or
      • the linstor.controllers key in your configuration file (/etc/linstor-gateway/linstor-gateway.toml)
      contain an URL to a LINSTOR controller, or that the LINSTOR controller is running on this machine.
[✓] drbd-reactor
[✓] Resource Agents
[✓] iSCSI
[✓] NVMe-oF
[✓] NFS

FATA[0000] Health check failed: found 1 issues 

HACK solution (use a reverse proxy)

$ cat /etc/linstor-gateway/linstor-gateway.toml
linstor.controllers = ["http://127.0.0.1:3369"]

$ cat /etc/haproxy/haproxy.cfg
...
frontend LINSTOR-IN
    bind            127.0.0.1:3369
    mode            http
    log             global
    option          http-keep-alive
    default_backend LINSTOR-CONTROLLERS
 
backend LINSTOR-CONTROLLERS
    mode                http
    timeout connect     30s
    timeout server      30s
    retries             3
    option              httpchk OPTIONS /health
    server       vc-swarm1 192.168.90.21:3371 ssl check inter 5s verify none crt /etc/linstor/ssl/clients.uncrypted.pem ca-file /etc/linstor/ssl/clients.uncrypted.pem
    server       vc-swarm2 192.168.90.22:3371 ssl check inter 5s verify none crt /etc/linstor/ssl/clients.uncrypted.pem ca-file /etc/linstor/ssl/clients.uncrypted.pem
    server       vc-swarm3 192.168.90.23:3371 ssl check inter 5s verify none crt /etc/linstor/ssl/clients.uncrypted.pem ca-file /etc/linstor/ssl/clients.uncrypted.pem
  • check health
$ linstor-gateway check-health
[✓] LINSTOR
[✓] drbd-reactor
[✓] Resource Agents
[✓] iSCSI
[✓] NVMe-oF
[✓] NFS

NFS Export with xfs filesystem

Hi,

With this configuration

$ linstor rg lp nfs_group -p
+-------------------------+
| Key             | Value |
|=========================|
| FileSystem/Type | xfs   |
+-------------------------+

$ linstor-gateway nfs create shares 10.20.117.254/32 10G --allowed-ips=10.20.117.0/24 --resource-group=nfs_group
Created export 'shares' at 10.20.117.254:/srv/gateway-exports/shares

linstor-gateway generates a drbd-reactor config file for an ext4 filesystem:

$ cat /etc/drbd-reactor.d/linstor-gateway-nfs-shares.toml | grep ext4
        "ocf:heartbeat:Filesystem fs_cluster_private device=/dev/drbd/by-res/shares/0 directory=/srv/ha/internal/shares fstype=ext4 run_fsck=no",
        "ocf:heartbeat:Filesystem fs_1 device=/dev/drbd/by-res/shares/1 directory=/srv/gateway-exports/shares fstype=ext4 run_fsck=no",

With the FileSystem/Type property set to ext4, everything works fine.
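
For comparison, the export entry the reporter presumably expected, i.e. the same generated line with only the fstype changed (illustrative only):

        "ocf:heartbeat:Filesystem fs_1 device=/dev/drbd/by-res/shares/1 directory=/srv/gateway-exports/shares fstype=xfs run_fsck=no",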
