Scaleway Documentation contents
Home Page: https://www.scaleway.com/en/docs/
How to configure and use GitHub Actions to build Docker images and push them to the Scaleway Container Registry.
Outline:
GitHub Actions are becoming an increasingly popular way to create continuous integration pipelines. Combined with the Scaleway Container Registry, they allow users to build their own Docker images and store them in the registry for further use by other managed resources.
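As a starting point, a minimal workflow could look like the sketch below. The registry endpoint format (rg.fr-par.scw.cloud) and the nologin login user come from Scaleway's registry login flow; the namespace, image name, and secret name are placeholders to adapt.

```yaml
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Log in with a Scaleway secret key stored as a repository secret
      - name: Log in to Scaleway Container Registry
        run: echo "${{ secrets.SCW_SECRET_KEY }}" | docker login rg.fr-par.scw.cloud -u nologin --password-stdin
      - name: Build and push the image
        run: |
          docker build -t rg.fr-par.scw.cloud/my-namespace/my-app:${{ github.sha }} .
          docker push rg.fr-par.scw.cloud/my-namespace/my-app:${{ github.sha }}
```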
Yes
n/a
https://www.scaleway.com/en/docs/tutorials/traefik-v2-cert-manager/
The section "Creating a wildcard DNS record and pointing your domain name to the IP address"
references an IP address, but it is not clear where this IP comes from.
We will be using the new Domains and DNS product, available on Scaleway, to create a wildcard record pointing to ----> this IP <---- address (the domain used in this tutorial will be "mytest.com"). A wildcard record (*.mydomain.com) allows you to point any sub-domain of your domain to the configured IP address.
It would be better to describe where we could find the IP. Does it come from your external load balancers? Or is it the external IP of the node itself?
For this tutorial, it could be the public IPv4 address of the node in the k8s cluster, but in general I'd love to see how this works with your external load balancers without having to spend much time on figuring this out :D
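For what it's worth, assuming the tutorial exposes Traefik through a Service of type LoadBalancer, the IP is the EXTERNAL-IP that kubectl get svc reports for that Service (a Scaleway Load Balancer address); without a load balancer it would indeed be the node's public IPv4. A hypothetical zone-file sketch (names and IP are placeholders):

```
; point any sub-domain at the ingress' external IP
*.mytest.com.   3600   IN   A   203.0.113.10
```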
No response
No response
I couldn't find a starting point for retrieving Kapsule metrics in the new Observability Cockpit. Leveraging Fluent Bit as an observability pipeline allows you to push all the needed metrics into the Cockpit and visualize them in Grafana.
Yes
No response
4993020b-9474-4a6b-bd92-f75236709147
https://www.scaleway.com/en/docs/changelog/rss.xml
The generated RSS feed for the Changelog page is missing publication dates for entries.
Cause: the item elements are all missing a pubDate sub-element.
Consequence: when importing the feed into a reader, history is lost as no dates are provided, and all sorts of deduplication issues appear.
Reference: https://www.rssboard.org/rss-specification#ltpubdategtSubelementOfLtitemgt
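For illustration, a minimal sketch of what each item needs: a pubDate sub-element carrying an RFC 822 date, which Python's standard library can produce directly (the title/link here are placeholders, not the feed's real content):

```python
from datetime import datetime, timezone
from email.utils import format_datetime

# publication timestamp of a changelog entry (placeholder value)
published = datetime(2023, 6, 1, 9, 30, tzinfo=timezone.utc)
pub_date = format_datetime(published)  # RFC 822 format, as the RSS spec requires

item = (
    "<item>"
    "<title>Changelog entry</title>"
    "<link>https://www.scaleway.com/en/docs/changelog/</link>"
    f"<pubDate>{pub_date}</pubDate>"
    "</item>"
)
print(item)
```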
No response
No response
https://www.scaleway.com/en/docs/network/vpc/troubleshooting/autoconfig-not-working/
It is suggested to use the following install command to install scaleway-ecosystem on RedHat-based distributions:
# rpm -vUh https://github.com/scaleway/scaleway-packages/releases/download/v0.0.4/scaleway-ecosystem-0.0.5.noarch.rpm
However, the version in this repository is not up to date (see: https://github.com/scaleway/scaleway-packages/releases), not to mention it is very old (2021).
It would be better to provide a procedure for installing scaleway-ecosystem through yum (0.0.6 is available on the Scaleway yum repository), which would also facilitate future upgrades of the package.
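For example, a repository definition along these lines would let yum handle installs and upgrades; the baseurl below is a placeholder, not the actual Scaleway repository URL (which the procedure would need to document):

```ini
# /etc/yum.repos.d/scaleway.repo (sketch; baseurl is a placeholder)
[scaleway]
name=Scaleway ecosystem
baseurl=https://<scaleway-rpm-repository>/el/$releasever/$basearch/
enabled=1
gpgcheck=1
```

After which sudo yum install scaleway-ecosystem would install the current version, and yum update would pick up future ones.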
No response
No response
In the documentation, some parts mention that we can include variables in the IoT Hub route configuration, such as here.
"As we are not limited to binary payloads, in this example we will use MySQL functions to manipulate $TOPIC and $PAYLOAD placeholders."
It would be great to detail all the available variables: for example, whether '$DEVICEID' is available, as well as their types.
I would love to write the doc myself but I do not know the missing variables.
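For context, a minimal sketch of a database-route query using only the two placeholders the quoted sentence confirms ($TOPIC and $PAYLOAD); the table and column names are made up for illustration:

```sql
-- table/columns are illustrative; $TOPIC and $PAYLOAD are the documented placeholders
INSERT INTO messages (topic, payload)
VALUES ($TOPIC, $PAYLOAD);
```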
No
No response
No response
No response
Hello,
If it's okay, I'd like to make a suggestion for the documentation. If that is not welcome, do not hesitate to tell me where or how I can make a suggestion, and to delete my issue.
I have been a long-time customer of Scaleway Dedibox and recently decided to seriously try out Scaleway Elements. I have found the documentation on Dedibox to always be quite good, at least enough to get stuff working. I have been trying to see whether Elements has equally good support for IPv6, and I have to say I am slightly disappointed in the documentation. Even IPv4 is barely mentioned at all. I'd love to see more on how IPv6 routing happens on the Elements network, since that's a part I am particularly interested in.
As a little background, this is what I have been running into:
I have been running a VPN server with full IPv6 support for a while on my Dedibox machine. My VPN clients are given a private IPv4 address and the IPv4 traffic is NATed through the server, while the clients also receive a fully public IPv6 address from my given /56 range there and that is also forwarded right through my server. I'd love to see if I can do something similar with my /64 address on Elements, but I haven't been able to get it working.
For this to work, I need a little more information on how IPv6 is even routed on the Elements network, and I cannot be the only one, if I am honest. A little more background would be appreciated. The only information I have ever been able to find was this blog post on how you have been rolling it out. That doesn't seem like the best kind of documentation.
Where is the good old RESTful API?
The present document says: "You can create and manage your Instances from the console, via the API or via the Scaleway Command Line Interface."
But I am unable to find the documentation for the API.
I need to automate operations such as the ones below:
No
No response
No response
No response
https://www.scaleway.com/en/docs/tutorials/backup-dedicated-server-s3-duplicity/#backup-script
Not sure this is the right repo to raise the issue, as it's not about content, but I didn't find where else to do it.
The backup script should be in one part, whereas it is divided into two parts.
In the GitHub-rendered version, there is no problem.
There is also a problem with the rendering of the note following the backup script.
No response
No response
https://www.scaleway.com/en/docs/tutorials/openvpn-instant-app/
Point 3 of the intro references Nextcloud ("Choose the NextCloud image in the InstantApps tab").
This has nothing to do with Nextcloud.
It should reference OpenVPN.
No response
No response
How to set up continuous deployment of your Scaleway container with GitHub Actions from your repository.
Many users may want to enable auto-deploy for their Docker instances. This is actually not that hard to do with GitHub Actions; a little blog post can do the job.
Yes
No response
a40dfadf-7242-4312-8b67-1c2536386ce5
Hi, I've just set up an Ubuntu 20 LTS on an ESXi on your servers; there are some errors/mistakes, at least for this distribution.
Anyway thanks for documentation ;)
Currently, there is no information on which instance types you can use in your pool (according to the docs and to the support team).
I would like to add some information on that by changing this
A pool is a set of identical Nodes. A pool has a name, a size (its current number of nodes), nodes number limits (min and max), and a Scaleway Instance type. Changing these limits increases/decreases the size of a pool. Thus, when autoscaling is enabled, the pool will grow or shrink inside those limits, depending on its load.
To
A pool is a set of identical Nodes. A pool has a name, a size (its current number of nodes), nodes number limits (min and max), and a Scaleway Instance type (minimum DEV1-M). Changing these limits increases/decreases the size of a pool. Thus, when autoscaling is enabled, the pool will grow or shrink inside those limits, depending on its load.
We are migrating our infrastructure to a European cloud provider. While setting up testing infrastructure with Terraform, I was getting error messages: scaleway-sdk-go: invalid argument(s): node_type does not respect constraint, this node type is not available in this zone
This led me to investigate which node types are available in which zone. After some time, we (the support team and I) found out that it has nothing to do with the zone (so the error message should probably be revised as well), but with the node_type itself. You can only use DEV1-M and higher in your pool.
Yes
No response
09e5de46-6743-41e4-985f-6d06176c9ae8
Hi there.
Would you consider adding some information to the docs for your new Elastic Metal product about what power-usage information a server exposes?
Below are the docs I'm referring to:
https://www.scaleway.com/en/docs/faq/elastic-metal/
And I'm asking in the context of using something like Scaphandre on it, to allow me, as a customer, to report the location-based carbon emissions of my compute:
https://github.com/hubblo-org/scaphandre
For this to be possible, I think the processors need to expose information recorded by the RAPL sensor as documented here:
BTW, I'm aware of Scaleway's environmental-leadership page below, and I'm a customer because of it, but I know that when I am running workloads in Poland or the Netherlands, there will be location-based carbon emissions I'd like to track.
https://www.scaleway.com/en/environmental-leadership/
Thanks!
https://www.scaleway.com/en/docs/compute/kubernetes/api-cli/using-scaleway-operator/#development
The GitHub project for the scaleway-operator indicates it is in public-archive state.
However, the documentation does not mention this archived state and still links to the archived project.
Should it be considered deprecated? Or is development happening elsewhere?
No response
No response
The documentation is for installing and configuring Zigbee2mqtt with IoT Hub. The tutorial will also develop a simple use case related to visualization with Observability. If successful, the first tutorial would be extended to document use cases leveraging other Scaleway products.
Some people would like to have full control over the management of their connected devices which are used to monitor and control their environment. Zigbee 3.0 is one of the recognized technologies used for communication between these devices, but proprietary gateways are often required to interact with such a network. Open source software and managed services from trusted providers could be used instead to ensure an appropriate level of security and performance.
Yes
e52e3a61-6e5f-4ba6-babf-0fe4500d1f32
No response
I tried connecting with Vinagre and Remmina but wasn't able to connect with either of those clients. I finally managed to connect using VNC Viewer by Real VNC. It would be great if you could add docs on how to connect using Vinagre, Remmina or some other FOSS VNC client.
Not all VNC clients seem to work with the documentation provided.
No
No response
No response
No response
This documentation is about configuring ingress-nginx to accept proxy-protocol-v2 communication; here are the Slack threads about it:
Tutorial to set up ingress-nginx to accept proxy-protocol-v2 communication
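As a sketch of what such a tutorial would cover: ingress-nginx reads this setting from its controller ConfigMap via the documented use-proxy-protocol key. The namespace and ConfigMap name below assume a standard Helm install, and the Scaleway Load Balancer in front must also be configured to send PROXY protocol v2.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace assume a standard Helm install
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"       # accept PROXY protocol from the load balancer
```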
Yes
You can send me a message on Slack, I'll send you my org-id if needed
@Grraahaam (Slack - Scaleway Community)
https://www.scaleway.com/en/docs/faq/kubernetes#what-is-scaleway-kubernetes-kapsule
https://blog.scaleway.com/multi-az-building-the-next-step-for-infrastructure-resiliency/ announced introduction of Multi-AZ in Paris. This mentions
In a multi-AZ scenario, we achieve enhanced availability and data durability by distributing the workload into separate geographical zones that are at significant distance from each other. Increasing your infrastructure resilience to a level that would be unreachable with a single data center.
The documentation page about Kubernetes says
Currently it is available in our Paris (France), Amsterdam (Netherlands) and Warsaw (Poland) Availability Zones and supports at least the latest version of the last 3 major Kubernetes releases.
but it is not clear if
Also, I am not able to find any reference in the documentation/FAQ as to how Multi-AZ can enable higher availability for workloads or block/object storage.
Being able to run Kubernetes worker nodes in multiple AZs helps us with graceful degradation of services should problems affect one full zone. Also, having block/object storage durability across two zones is another major improvement.
No response
https://www.scaleway.com/en/docs/tutorials/erpnext-13/
Install MariaDB
Installing the Frappe Framework
Getting production-ready
In the Install MariaDB section, I ran into an issue: a 404 Not Found from the mirror repository.
Error Output:
Ign:1 https://mariadb.mirror.liquidtelecom.com/repo/10.6/ubuntu focal/main arm64 mysql-common all 1:10.6.8+maria~focal
Ign:1 https://mariadb.mirror.liquidtelecom.com/repo/10.6/ubuntu focal/main arm64 mysql-common all 1:10.6.8+maria~focal
Err:1 https://mariadb.mirror.liquidtelecom.com/repo/10.6/ubuntu focal/main arm64 mysql-common all 1:10.6.8+maria~focal
404 Not Found [IP: 197.155.77.1 443]
E: Failed to fetch https://mariadb.mirror.liquidtelecom.com/repo/10.6/ubuntu/pool/main/m/mariadb-10.6/mysql-common_10.6.8+maria~focal_all.deb 404 Not Found [IP: 197.155.77.1 443]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Temporary solution: I replaced https://mariadb.mirror.liquidtelecom.com/repo/10.6/ubuntu with https://mirrors.gigenet.com/mariadb/repo/10.6/ubuntu
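The swap can be scripted; the sketch below demonstrates the substitution on a scratch copy in /tmp, since the real path of the apt source entry depends on how the tutorial added it (e.g. a file under /etc/apt/sources.list.d/):

```shell
# Demonstrated on a scratch file; point sed at your real apt source file instead.
echo "deb [arch=arm64] https://mariadb.mirror.liquidtelecom.com/repo/10.6/ubuntu focal main" > /tmp/mariadb.list
# swap the dead mirror for a live one
sed -i 's|mariadb.mirror.liquidtelecom.com/repo|mirrors.gigenet.com/mariadb/repo|' /tmp/mariadb.list
cat /tmp/mariadb.list
```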
In the Installing the Frappe Framework section, I ran into an issue regarding soft_unicode and markupsafe.
Error Output:
lance@erpnext-arm64v8:~$ bench init /home/lance/frappe-bench --frappe-path https://github.com/frappe/frappe --frappe-branch version-13 --python python3
Traceback (most recent call last):
File "/usr/local/bin/bench", line 11, in <module>
load_entry_point('bench', 'console_scripts', 'bench')()
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 489, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 2793, in load_entry_point
return ep.load()
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 2411, in load
return self.resolve()
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 2417, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/home/lance/.bench/bench/__init__.py", line 1, in <module>
from jinja2 import Environment, PackageLoader
File "/usr/local/lib/python3.8/dist-packages/jinja2/__init__.py", line 33, in <module>
from jinja2.environment import Environment, Template
File "/usr/local/lib/python3.8/dist-packages/jinja2/environment.py", line 15, in <module>
from jinja2 import nodes
File "/usr/local/lib/python3.8/dist-packages/jinja2/nodes.py", line 19, in <module>
from jinja2.utils import Markup
File "/usr/local/lib/python3.8/dist-packages/jinja2/utils.py", line 642, in <module>
from markupsafe import Markup, escape, soft_unicode
ImportError: cannot import name 'soft_unicode' from 'markupsafe' (/usr/local/lib/python3.8/dist-packages/markupsafe/__init__.py)
Temporary solution: I attempted to fix it by running this command: sudo pip3 install markupsafe==2.0.1. Luckily, it worked.
In the Getting production-ready section, when I ran the command sudo bench setup production <name>, I ran into an issue regarding Jinja2.
Error output:
lance@erpnext-arm64v8:~/frappe-bench$ sudo bench setup production lance
[sudo] password for lance:
$ sudo -H /usr/bin/python3 -m pip install ansible
/usr/local/lib/python3.8/dist-packages/cryptography/hazmat/backends/openssl/x509.py:14: CryptographyDeprecationWarning: This version of cryptography contains a temporary pyOpenSSL fallback path. Upgrade pyOpenSSL now.
warnings.warn(
Collecting ansible
Downloading ansible-6.0.0-py3-none-any.whl (40.3 MB)
|████████████████████████████████| 40.3 MB 9.2 MB/s
Collecting ansible-core~=2.13.0
Downloading ansible_core-2.13.1-py3-none-any.whl (2.1 MB)
|████████████████████████████████| 2.1 MB 35.8 MB/s
Collecting resolvelib<0.9.0,>=0.5.3
Downloading resolvelib-0.8.1-py2.py3-none-any.whl (16 kB)
Requirement already satisfied: PyYAML>=5.1 in /usr/lib/python3/dist-packages (from ansible-core~=2.13.0->ansible) (5.3.1)
Collecting jinja2>=3.0.0
Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
|████████████████████████████████| 133 kB 26.1 MB/s
Requirement already satisfied: cryptography in /usr/local/lib/python3.8/dist-packages (from ansible-core~=2.13.0->ansible) (37.0.2)
Collecting packaging
Downloading packaging-21.3-py3-none-any.whl (40 kB)
|████████████████████████████████| 40 kB 6.7 MB/s
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2>=3.0.0->ansible-core~=2.13.0->ansible) (2.0.1)
Requirement already satisfied: cffi>=1.12 in /usr/local/lib/python3.8/dist-packages (from cryptography->ansible-core~=2.13.0->ansible) (1.15.0)
Collecting pyparsing!=3.0.5,>=2.0.2
Downloading pyparsing-3.0.9-py3-none-any.whl (98 kB)
|████████████████████████████████| 98 kB 7.5 MB/s
Requirement already satisfied: pycparser in /usr/local/lib/python3.8/dist-packages (from cffi>=1.12->cryptography->ansible-core~=2.13.0->ansible) (2.21)
ERROR: bench 5.0.0 has requirement Jinja2==2.10.3, but you'll have jinja2 3.1.2 which is incompatible.
Installing collected packages: resolvelib, jinja2, pyparsing, packaging, ansible-core, ansible
Attempting uninstall: jinja2
Found existing installation: Jinja2 2.10.3
Uninstalling Jinja2-2.10.3:
Successfully uninstalled Jinja2-2.10.3
Successfully installed ansible-6.0.0 ansible-core-2.13.1 jinja2-3.1.2 packaging-21.3 pyparsing-3.0.9 resolvelib-0.8.1
$ bench setup role fail2ban
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 583, in _build_master
ws.require(__requires__)
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 900, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 791, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (Jinja2 3.1.2 (/usr/local/lib/python3.8/dist-packages), Requirement.parse('Jinja2==2.10.3'), {'bench'})
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/bench", line 6, in <module>
from pkg_resources import load_entry_point
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 3192, in <module>
def _initialize_master_working_set():
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 3175, in _call_aside
f(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 3204, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 585, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 598, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 786, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'Jinja2==2.10.3' distribution was not found and is required by bench
$ bench setup role nginx
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 583, in _build_master
ws.require(__requires__)
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 900, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 791, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (Jinja2 3.1.2 (/usr/local/lib/python3.8/dist-packages), Requirement.parse('Jinja2==2.10.3'), {'bench'})
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/bench", line 6, in <module>
from pkg_resources import load_entry_point
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 3192, in <module>
def _initialize_master_working_set():
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 3175, in _call_aside
f(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 3204, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 585, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 598, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 786, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'Jinja2==2.10.3' distribution was not found and is required by bench
$ bench setup role supervisor
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 583, in _build_master
ws.require(__requires__)
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 900, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 791, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (Jinja2 3.1.2 (/usr/local/lib/python3.8/dist-packages), Requirement.parse('Jinja2==2.10.3'), {'bench'})
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/bench", line 6, in <module>
from pkg_resources import load_entry_point
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 3192, in <module>
def _initialize_master_working_set():
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 3175, in _call_aside
f(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 3204, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 585, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 598, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 786, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'Jinja2==2.10.3' distribution was not found and is required by bench
Port configuration list:
Site skintalk.prod assigned port: 80
Traceback (most recent call last):
File "/usr/local/bin/bench", line 11, in <module>
load_entry_point('bench', 'console_scripts', 'bench')()
File "/home/lance/.bench/bench/cli.py", line 41, in cli
bench_command()
File "/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/lance/.bench/bench/commands/setup.py", line 73, in setup_production
setup_production(user=user, yes=yes)
File "/home/lance/.bench/bench/config/production_setup.py", line 25, in setup_production
supervisor_conf = os.path.join(get_supervisor_confdir(), '{bench_name}.{extn}'.format(
File "/usr/lib/python3.8/posixpath.py", line 76, in join
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
No solution was found for Problem 3 (Getting production-ready). I'm still stuck on this. Please help me. Thank you.
June 22, 2022 broke the documentation.
https://www.scaleway.com/en/docs/tutorials/setup-postfix-ubuntu-bionic/
There is a typo within Sieve configuration at step 8 of the process. The targeted filepath is wrong, it should be changed.
7.
Create a global sieve filter in the file /var/vmail/mail/sieve/global/spam-global.sieve. It will move emails marked as spam directly to the spam folder:
...
8. Create a script, named /var/vmail/mail/sieve/global/spam-global.sieve, that will be triggered each time you manually move an email into the spam folder:
...
sievec /var/vmail/mail/sieve/global/spam-global.sieve
sievec /var/vmail/mail/sieve/global/report-spam.sieve <<<<<<<
sievec /var/vmail/mail/sieve/global/report-ham.sieve
chown -R vmail: /var/vmail/mail/sieve/
The path at step 8 should be /var/vmail/mail/sieve/global/report-spam.sieve (matching the sievec command run against it and the service dovecot restart at step 10).
No response
No response
https://www.scaleway.com/en/docs/tutorials/kubeflow-on-kapsule/
Hello,
Trying to deploy Kubeflow on Kubernetes Kapsule following this documentation, and it seems it's not up to date.
As I'm trying to run kfctl apply -V -f ${CONFIG_URI}, it just crashes:
WARN[0018] Will retry in 8 seconds. filename="kustomize/kustomize.go:285"
serviceaccount/application-controller-service-account unchanged
clusterrole.rbac.authorization.k8s.io/application-controller-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/application-controller-cluster-role-binding unchanged
service/application-controller-service unchanged
statefulset.apps/application-controller-stateful-set configured
WARN[0027] Encountered error applying application application: (kubeflow.error): Code 500 with message: Apply.Run : [unable to recognize "/tmp/kout006133288": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1", unable to recognize "/tmp/kout006133288": no matches for kind "Application" in version "app.k8s.io/v1beta1"] filename="kustomize/kustomize.go:284"
WARN[0027] Will retry in 18 seconds. filename="kustomize/kustomize.go:285"
serviceaccount/application-controller-service-account unchanged
clusterrole.rbac.authorization.k8s.io/application-controller-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/application-controller-cluster-role-binding unchanged
service/application-controller-service unchanged
statefulset.apps/application-controller-stateful-set configured
WARN[0046] Encountered error applying application application: (kubeflow.error): Code 500 with message: Apply.Run : [unable to recognize "/tmp/kout546473958": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1", unable to recognize "/tmp/kout546473958": no matches for kind "Application" in version "app.k8s.io/v1beta1"] filename="kustomize/kustomize.go:284"
WARN[0046] Will retry in 16 seconds. filename="kustomize/kustomize.go:285"
No response
No response
Following a discussion on Slack, it appears security groups do not apply to traffic incoming from a Private Network.
I think this should be mentioned in the docs:
I expected the security groups to apply to all traffic, including traffic from the Private Network, the way a firewall on a VM would work.
Discussing with colleagues, I'm probably not the only one with this expectation.
Yes
No response
Add documentation on how to set up a Kapsule cluster with ingress in Terraform (or in a way that can be automated).
Currently, documentation and blogs mostly reference the UI deployment process, and for people who are 'new' to Scaleway (and maybe ingress), it would be a huge help to have either:
Most current documentation relies on setting up an ingress controller via the UI. However, even the CLI and API documentation appears to be phasing out the managed ingress controllers.
Currently, the Terraform provider documentation for Scaleway clusters only provides an example for the nginx-ingress controller.
It would be lovely to have an up-to-date snippet for provisioning Kapsule clusters with Terraform and adding a Traefik (v2) ingress controller.
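For reference, a hedged sketch of the Terraform side, using the scaleway provider's scaleway_k8s_cluster and scaleway_k8s_pool resources; the version, node type, and sizes are illustrative, and the ingress controller itself would then be layered on top (e.g. with the helm provider):

```hcl
resource "scaleway_k8s_cluster" "main" {
  name                        = "my-cluster"
  version                     = "1.28"   # pick a currently supported version
  cni                         = "cilium"
  delete_additional_resources = false
}

resource "scaleway_k8s_pool" "default" {
  cluster_id = scaleway_k8s_cluster.main.id
  name       = "default"
  node_type  = "DEV1-M"   # smallest type accepted for Kapsule pools
  size       = 2
}
```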
No
No response
No response
No response
https://www.scaleway.com/en/docs/storage/block/how-to/increase-block-volume/
This documentation uses a block volume that is an additional volume.
It suggests unmounting the volume, but that will not work when the block volume is used as the root_volume.
As most new instance types now boot on Block Storage, this should be updated with that in mind.
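For a boot-on-block instance, the update would presumably describe an online grow instead of the unmount flow; a dry-run sketch (device and partition number are assumptions, check lsblk, and the filesystem is assumed to be ext4):

```shell
# Dry run: the commands are echoed rather than executed. Drop the echos and run
# as root on the instance itself; a mounted root volume cannot be unmounted,
# but growpart and resize2fs both work online.
DEV=/dev/sda
PART=1
echo growpart "$DEV" "$PART"    # extend the partition to fill the resized volume
echo resize2fs "${DEV}${PART}"  # grow the ext4 filesystem while mounted
```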
No response
No response
Yes
scaleway/serverless-examples#10
79eecd06-ae47-404c-bd9a-d793e62c06e1
This tutorial provides a step-by-step guide for deploying a containerized Laravel application on the Scaleway cloud platform. It covers the entire process, from setting up the required infrastructure to deploying and running the application using Docker and Scaleway services (i.e., Serverless Containers). The tutorial aims to help developers easily deploy their Laravel applications on Scaleway by providing clear instructions and best practices.
Overview:
This documentation is needed to facilitate the deployment of Laravel applications on the Scaleway cloud platform. By providing a detailed tutorial, developers can leverage containerization and Scaleway's infrastructure to deploy their applications quickly and efficiently. The tutorial covers essential topics, including infrastructure setup, containerization, deployment processes, etc. By following this guide, developers can save time and effort and ensure a smooth deployment experience for their Laravel applications on Scaleway.
Yes
e52e3a61-6e5f-4ba6-babf-0fe4500d1f32
No response
https://www.scaleway.com/en/docs/compute/containers/how-to/add-a-custom-domain-to-a-container/
Hello,
There is a list of steps to follow to be able to get the custom domain to a container.
When looking at the Scaleway website, the numbering of the steps is 1, 1, 2.
This makes the documentation confusing, as I was interpreting it as two different ways of doing things.
In the documentation on GitHub, the numbering is correct.
https://github.com/scaleway/docs-content/blob/main/compute/containers/how-to/add-a-custom-domain-to-a-container.mdx
No response
No response
The documentation needs a substantial update.
Which I have done here: #1919
I think Scaleway should no longer recommend dhclient as the default DHCPv6 client for the Dedibox network.
dhclient reached EOL in 2022 and is discontinued. It creates a security risk for Scaleway users.
Scaleway should recommend systemd-networkd as the default network manager and DHCPv4/DHCPv6 client for the Dedibox network. systemd is also natively installed on the Dedibox operating systems.
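A minimal sketch of what the recommended configuration could look like (the interface glob is an assumption; the keys are standard systemd.network options):

```ini
# /etc/systemd/network/10-dedibox.network (sketch)
[Match]
Name=en*

[Network]
DHCP=yes          # enables both the built-in DHCPv4 and DHCPv6 clients
IPv6AcceptRA=yes  # process router advertisements for IPv6
```

Followed by systemctl enable --now systemd-networkd.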
Yes
Not sure if this is an Organization ID?
Devopsy - [email protected]
https://blog.scaleway.com/dbaas-behind-the-scenes/?_ga=2.165204476.21152594.1650533932-106614129.1650533932 does a great job explaining how it works.
One additional piece of information would make it more useful: are standby/replica instances hosted in the same data center as the primary/master?
Could you please add this to the Databases FAQ?
In the above docs link you've provided a way to access the s3 bucket. I've tried this and it works.
But when I do something similar to create a container job that runs ls, it either gives me
An error occurred (InvalidAccessKeyId) when calling the ListObjectsV2 operation: The AWS Access Key Id you provided does not exist in our records.
or
Unable to locate credentials. You can configure credentials by running "aws configure".
I am using the k8s Python API to create the container; this is the code I've implemented.
client.V1Container(
    name="scaleway-cli",
    image="amazon/aws-cli:latest",
    command=['bash', '-c'],
    args=[
        "aws configure set plugins.endpoint awscli_plugin_endpoint && " +
        "aws configure set s3.endpoint_url https://s3.fr-par.scw.cloud && " +
        "aws configure set signature_version s3v4 && " +
        "aws s3 ls && " +
        f"aws s3 cp s3://myobjectstore-files/myfile.sql ."
    ],
    env=[
        client.V1EnvVar(
            name="aws_access_key_id",
            value_from=client.V1EnvVarSource(
                secret_key_ref=client.V1SecretKeySelector(
                    name="scaleway-s3-secret",
                    key="accesskey"
                )
            )
        ),
        client.V1EnvVar(
            name="aws_secret_access_key",
            value_from=client.V1EnvVarSource(
                secret_key_ref=client.V1SecretKeySelector(
                    name="scaleway-s3-secret",
                    key="secretkey"
                )
            )
        )
    ],
)
Therefore, can you provide an example of a pod with a container that uses the amazon/aws-cli:latest image to do ls or cp?
I have to do a job that copies some files from object storage to my k8s container.
Adding k8s integration to this would make life easier.
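For what it's worth, both errors above are consistent with the AWS CLI not seeing the credentials at all: the CLI only reads the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in uppercase, while the snippet above sets them in lowercase. A minimal sketch of a corrected container spec, written as plain dicts so it can be read without the kubernetes client installed (the secret name scaleway-s3-secret and its keys are taken from the snippet above; this is an illustration, not an official example):

```python
# Sketch only: the AWS CLI reads credentials from uppercase environment
# variables, so the env entries must be named AWS_ACCESS_KEY_ID and
# AWS_SECRET_ACCESS_KEY. Secret name and keys come from the snippet above.
def aws_env_from_secret(secret_name: str) -> list:
    """Build the env section of a container spec as plain dicts."""
    return [
        {
            # Uppercase is required; lowercase names are ignored by the CLI
            "name": "AWS_ACCESS_KEY_ID",
            "valueFrom": {"secretKeyRef": {"name": secret_name, "key": "accesskey"}},
        },
        {
            "name": "AWS_SECRET_ACCESS_KEY",
            "valueFrom": {"secretKeyRef": {"name": secret_name, "key": "secretkey"}},
        },
    ]

container = {
    "name": "scaleway-cli",
    "image": "amazon/aws-cli:latest",
    "command": ["bash", "-c"],
    # --endpoint-url avoids needing the endpoint plugin inside the container
    "args": ["aws s3 ls --endpoint-url https://s3.fr-par.scw.cloud"],
    "env": aws_env_from_secret("scaleway-s3-secret"),
}
print(container["env"][0]["name"])  # AWS_ACCESS_KEY_ID
```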
Yes
No response
No response
https://www.scaleway.com/en/docs/serverless/functions/reference-content/code-examples/
The code example references os; this should be removed.

There's no mention anywhere that there is a hard cap of one Bare Metal M1 per organization. We hit this after starting to use your services and hoping to scale out to a few machines, and it took a manual chat with customer service for us to learn about it, which left a very bad taste in our mouths as a new customer. This should be clearly marked in your docs near the 24-hour leasing limit.
No response
No response
https://www.scaleway.com/en/developers/api/kubernetes/
In step 5, the curl API call is obviously missing the host part (https://api.scaleway.com).
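For reference, a complete call might look like the sketch below (the path is assumed from the Kubernetes API reference and $SCW_SECRET_KEY is a placeholder; this is an illustration, not the documented command):

```shell
# Step 5 with the host part included ($SCW_SECRET_KEY is a placeholder)
curl -X GET \
  -H "X-Auth-Token: $SCW_SECRET_KEY" \
  "https://api.scaleway.com/k8s/v1/regions/fr-par/clusters"
```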
No response
No response
Sticky sessions on Load Balancers are under-documented: there are two modes of stickiness, table-based and cookie-based.
I think those modes should be documented; the current doc does not mention them, as far as I can tell.
The file is here, I think:
https://github.com/scaleway/docs-content/blob/main/network/load-balancer/concepts.mdx#sticky-session
To be able to make an informed decision on which mode to use.
No
No response
No response
No response
A tutorial on how to set up a Matrix homeserver for secure communications on Scaleway Elements.
I would base this on my own, which you can find here: https://blog.facha.dev/how-to-self-host-matrix-and-element-docker-compose/
I didn't find this in the Scaleway tutorials; Matrix is one of the fastest-growing secure messaging platforms.
Yes
No response
20413908-91f0-4a73-ab12-45d339b146be
No response
https://developers.scaleway.com/en/products/containers/api/#get-your-containers-logs
The sample to retrieve container logs does not work:
curl -X GET -H "X-Auth-Token: $TOKEN" "https://api.scaleway.com/containers/v1beta1/regions/$REGION/logs?container_id=$CONTAINER_ID"
Using
curl -X GET -H "X-Auth-Token: $TOKEN" "https://api.scaleway.com/containers/v1beta1/regions/$REGION/containers/$CONTAINER_ID/logs"
instead works fine.
No response
No response
Hi,
I've been having trouble getting a fresh VPC + Public Gateway + Instance (Ubuntu Focal image provided by Scaleway) + SSH NAT rule working (same issue as https://www.scaleway.com/en/docs/network/vpc/troubleshooting/cant-connect-to-instance-with-pn-gateway).
After a few experiments, it turns out I need to run apt update && apt upgrade and then reboot once before the instance properly joins the VPC with DHCP working.
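The workaround just described, as run on the freshly created instance (assumes a root shell on the Ubuntu Focal image):

```shell
# Workaround: bring the image up to date, then reboot so DHCP on the
# private network starts working (run as root on the instance)
apt update && apt -y upgrade
reboot
```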
Therefore, it feels like VPC and Public Gateway are somewhat broken out of the box right now.
As far as I can tell, this is not specifically documented in the troubleshooting section or elsewhere in the documentation.
I think it should be fixed, or at the very least clearly documented somewhere.
If you accept PRs, I can submit a change to the troubleshooting section.
Hello,
When you create an Object Storage bucket in the Scaleway UI, it gives you an endpoint address of the form https://<my-bucket>.s3.fr-par.scw.cloud, but an S3 endpoint is usually expected to look like https://s3.fr-par.scw.cloud/<my-bucket>.
Where does this URL come from, and why isn't a path-style S3 URL shown instead?
Looking at the documentation linked from the bucket page doesn't give much information about this. However, the object-storage-aws-cli documentation uses https://s3.fr-par.scw.cloud/ without saying where it comes from.
Proposal:
Add a path-style S3 address in the UI.
Link from the UI to the aws-cli documentation.
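For context, both URL forms address the same bucket: S3-compatible endpoints generally accept a virtual-hosted style (https://<bucket>.s3.fr-par.scw.cloud) and a path style (https://s3.fr-par.scw.cloud/<bucket>). A small illustrative helper (hypothetical, not part of any Scaleway tooling) rewriting one form into the other:

```python
from urllib.parse import urlparse

def to_path_style(url: str) -> str:
    """Rewrite a virtual-hosted-style S3 URL into its path-style equivalent.

    Assumes the bucket name contains no dots (the first host label is
    treated as the bucket). Illustration only, not official tooling.
    """
    parsed = urlparse(url)
    bucket, _, endpoint = parsed.netloc.partition(".")
    return f"{parsed.scheme}://{endpoint}/{bucket}{parsed.path}"

print(to_path_style("https://my-bucket.s3.fr-par.scw.cloud"))
# prints https://s3.fr-par.scw.cloud/my-bucket
```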
Regards
https://console.scaleway.com/iam
It says MIGRATE where MIGRER is expected:
"Type MIGRATE below if you wish to continue:"
"must match the regular expression ^MIGRER$."
No response
No response
I am trying to create my own Certificate Authority, but nothing I tried works. I tried to follow the documentation at https://www.scaleway.com/en/docs/iot/iot-hub/how-to/provide-own-certificate-authority/ but it's not enough.
More explanation is needed on how to generate one's own Certificate Authority.
No
No response
No response
Details on how to set up the Mac and connect to it from Visual Studio on a PC for building .NET MAUI or Xamarin projects.
This does not work out of the box and requires several steps, both in the OS and in software on the Mac.
Yes
No response
Org Coding Company UG
ID: 687ef48d-2b61-452d-964a-58f00d6a4425
On the page https://www.scaleway.com/en/docs/containers/kubernetes/api-cli/exposing-services/ it is not clear that we cannot expose UDP services, only TCP and HTTP.
On https://www.scaleway.com/en/docs/network/load-balancer/quickstart/ we see that
Available protocols are HTTP or TCP.
Documenting this would avoid unnecessary research into a feature that does not exist.
Yes
39926239-b22b-4d3e-b92a-9a0d16bb6c5d
No response
How to set up Apache with Let's Encrypt TLS certificate(s) managed using mod_md. In addition, the guide will also describe how to configure Apache for presently used TLS standards (1.2 and 1.3).
Unlike Certbot, this approach is built into Apache and doesn't require manually setting up a systemd timer that stops the web server and restarts it later so Certbot can renew certificates.
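A sketch of what the tutorial's core configuration could cover (directive names per mod_md/mod_ssl; example.com is a placeholder, and a real setup needs more, such as the MDCertificateAgreement terms and an HTTP vhost for challenges):

```
# Sketch (example.com is a placeholder). mod_md obtains and renews the
# certificate for the managed domain; no SSLCertificateFile is needed.
MDomain example.com
MDCertificateAgreement accepted

<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
</VirtualHost>

# Allow only currently used TLS versions
SSLProtocol -all +TLSv1.2 +TLSv1.3
```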
Yes
n/a
54fc9dcf-a8af-4ac8-90a4-b8a44e722162
I added a new SSH key to the project and tried accessing an already created instance using it, but it doesn't work. Accessing with SSH keys that were added before the instance was created does work.
Looking at the documentation (https://www.scaleway.com/en/docs/console/my-project/how-to/create-ssh-key/#how-to-upload-the-public-ssh-key-to-the-scaleway-interface), it says to use
scw-fetch-ssh-keys --upgrade
but this command does not work with the Scaleway CLI. It seems this needs to be updated.
It is needed because the old documentation does not work.
No
Hi,
This issue comes from a discussion on the community slack in the load-balancer channel
I think the doc at https://www.scaleway.com/en/docs/network/load-balancer/how-to/setup-ssl-offloading/#how-to-configure-tlsssl-bridging could state more clearly that SSL/TLS bridging only works with backends that bind to port 443.
Currently, there is a misleading note that actually concerns only frontends and states the opposite (that ports other than 443 may be used).
Maybe something along these lines would be clearer:
To configure SSL bridging:
1. Create a frontend listening on port 443 with a certificate. (It is possible to configure other ports; this is simply the conventional configuration.)
2. Create a backend listening on port 443 with the TCP protocol. ⚠️ Currently, SSL/TLS bridging can only be achieved with backends listening on port 443.
...
I think the documentation search engine could be slightly improved.
The rare times I used it, it couldn't find what I was looking for.
A simple example:
Facilitate search of documentation contents.
No
No response
No response
No response
https://www.scaleway.com/en/docs/tutorials/backup-dedicated-server-s3-duplicity/
duplicity 0.8.23 uses the boto3 library for S3 connections and requires a different syntax for the bucket name plus two additional options for duplicity.
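For illustration, with the boto3 backend the bucket is addressed via the boto3+s3:// scheme and the endpoint and region are passed explicitly. A sketch with placeholder names (flag names per duplicity 0.8.x; verify against the installed version):

```shell
# Sketch for duplicity 0.8.x with the boto3 backend (placeholder names);
# the endpoint URL and region are the two additional options mentioned above
duplicity \
  --s3-endpoint-url https://s3.fr-par.scw.cloud \
  --s3-region-name fr-par \
  /path/to/data \
  boto3+s3://my-bucket/server-backup
```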
No response