yteraoka / blog-1q77-com
Home Page: https://blog.1q77.com/
Customers can now connect Google Workspace to AWS IAM Identity Center (successor to AWS Single Sign-On) once and centrally manage access to AWS accounts and applications in IAM Identity Center. With this integration, end users sign in with their Google Workspace identities and access all of their assigned AWS accounts and applications. Administrators can simplify AWS access management across multiple accounts while end users keep the familiar Google Workspace sign-in experience. IAM Identity Center and Google Workspace use Google automatic provisioning to securely provision users into IAM Identity Center, reducing administrative effort.
The interoperability between IAM Identity Center and Google Workspace lets administrators centrally assign user access to AWS Organizations accounts and applications. This makes it easier for AWS administrators to manage access to AWS and to ensure that Google Workspace users can reach the right AWS accounts. To set up Google automatic provisioning and the SAML (Security Assertion Markup Language) connection, administrators can use the AWS cloud application available in Google's pre-integrated app catalog. Users then get single-click access to all of their assigned accounts and applications from the IAM Identity Center user portal, and can sign in with their Google credentials to the AWS Management Console, the AWS Command Line Interface (CLI), and Identity Center enabled applications.
This capability is available in all Regions where IAM Identity Center is supported. To learn how to connect Google Workspace to IAM Identity Center as an external identity provider, or for more details, see the AWS IAM Identity Center documentation and the AWS IAM Identity Center User Guide.
https://gist.github.com/gatanaso/cc5e65b99c5d7b8786e2d38d0bc1bb05
// jshell session: fetch a page with the JDK 11+ HttpClient
import java.net.http.*;
var client = HttpClient.newHttpClient();
var uri = new URI("https://www.google.com/");  // java.net.URI; checked exceptions are allowed at jshell top level
var request = HttpRequest.newBuilder().uri(uri).build();
var response = client.send(request, HttpResponse.BodyHandlers.ofString());
System.out.println(response.body());
/exit
Checking whether a field exists (CloudWatch Logs Insights):
fields @timestamp, status
| filter !ispresent(status)
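Conversely, to keep only the events where the field is present:
fields @timestamp, status
| filter ispresent(status)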
~/ghq/github.com/hashicorp/tfc-getting-started
$ terraform login [main]
Terraform will request an API token for app.terraform.io using your browser.
If login is successful, Terraform will store the token in plain text in
the following file for use by subsequent commands:
/home/teraoka/.terraform.d/credentials.tfrc.json
Do you want to proceed?
Only 'yes' will be accepted to confirm.
Enter a value: yes
---------------------------------------------------------------------------------
Terraform must now open a web browser to the tokens page for app.terraform.io.
If a browser does not open this automatically, open the following URL to proceed:
https://app.terraform.io/app/settings/tokens?source=terraform-login
---------------------------------------------------------------------------------
Generate a token using your browser, and copy-paste it into this prompt.
Terraform will store the token in plain text in the following file
for use by subsequent commands:
/home/teraoka/.terraform.d/credentials.tfrc.json
Token for app.terraform.io:
Enter a value: Opening in existing browser session.
Retrieved token for user teraoka
---------------------------------------------------------------------------------
(Terraform Cloud ASCII-art logo)
Welcome to Terraform Cloud!
Documentation: terraform.io/docs/cloud
New to TFC? Follow these steps to instantly apply an example configuration:
$ git clone https://github.com/hashicorp/tfc-getting-started.git
$ cd tfc-getting-started
$ scripts/setup.sh
_direnv_hook:2: no such file or directory: /home/linuxbrew/.linuxbrew/Cellar/direnv/2.32.2/bin/direnv
[23-05-21 23:07:05] teraoka@inspiron
~/ghq/github.com/hashicorp/tfc-getting-started
$ scripts/setup.sh [main]
--------------------------------------------------------------------------
Getting Started with Terraform Cloud
-------------------------------------------------------------------------
Terraform Cloud offers secure, easy-to-use remote state management and allows
you to run Terraform remotely in a controlled environment. Terraform Cloud runs
can be performed on demand or triggered automatically by various events.
This script will set up everything you need to get started. You'll be
applying some example infrastructure - for free - in less than a minute.
First, we'll do some setup and configure Terraform to use Terraform Cloud.
Press any key to continue (ctrl-c to quit):
Creating an organization and workspace...
Writing Terraform Cloud configuration to backend.tf...
========================================================================
Ready to go; the example configuration is set up to use Terraform Cloud!
An example workspace named 'getting-started' was created for you.
You can view this workspace in the Terraform Cloud UI here:
https://app.terraform.io/app/example-org-849606/workspaces/getting-started
Next, we'll run 'terraform init' to initialize the backend and providers:
$ terraform init
Press any key to continue (ctrl-c to quit):
Initializing Terraform Cloud...
Initializing provider plugins...
- Finding latest version of hashicorp/fakewebservices...
- Installing hashicorp/fakewebservices v0.2.3...
- Installed hashicorp/fakewebservices v0.2.3 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform Cloud has been successfully initialized!
You may now begin working with Terraform Cloud. Try running "terraform plan" to
see any changes that are required for your infrastructure.
If you ever set or change modules or Terraform Settings, run "terraform init"
again to reinitialize your working directory.
...
========================================================================
Now it’s time for 'terraform plan', to see what changes Terraform will perform:
$ terraform plan
Press any key to continue (ctrl-c to quit):
Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
To view this run in a browser, visit:
https://app.terraform.io/app/example-org-849606/getting-started/runs/run-nGJs8KtvFyPZG8kU
Waiting for the plan to start...
Terraform v1.4.6
on linux_amd64
Initializing plugins and modules...
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
+ create
Terraform will perform the following actions:
# fakewebservices_database.prod_db will be created
+ resource "fakewebservices_database" "prod_db" {
+ id = (known after apply)
+ name = "Production DB"
+ size = 256
}
# fakewebservices_load_balancer.primary_lb will be created
+ resource "fakewebservices_load_balancer" "primary_lb" {
+ id = (known after apply)
+ name = "Primary Load Balancer"
+ servers = [
+ "Server 1",
+ "Server 2",
]
}
# fakewebservices_server.servers[0] will be created
+ resource "fakewebservices_server" "servers" {
+ id = (known after apply)
+ name = "Server 1"
+ type = "t2.micro"
+ vpc = "Primary VPC"
}
# fakewebservices_server.servers[1] will be created
+ resource "fakewebservices_server" "servers" {
+ id = (known after apply)
+ name = "Server 2"
+ type = "t2.micro"
+ vpc = "Primary VPC"
}
# fakewebservices_vpc.primary_vpc will be created
+ resource "fakewebservices_vpc" "primary_vpc" {
+ cidr_block = "0.0.0.0/1"
+ id = (known after apply)
+ name = "Primary VPC"
}
Plan: 5 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Cost Estimation:
Resources: 0 of 5 estimated
$0.0/mo +$0.0
...
========================================================================
The plan is complete!
This plan was initiated from your local machine, but executed within
Terraform Cloud!
Terraform Cloud runs Terraform on disposable virtual machines in
its own cloud infrastructure. This 'remote execution' helps provide consistency
and visibility for critical provisioning operations. It also enables notifications,
version control integration, and powerful features like Sentinel policy enforcement
and cost estimation (shown in the output above).
To actually make changes, we'll run 'terraform apply'. We'll also auto-approve
the result, since this is an example:
$ terraform apply -auto-approve
Press any key to continue (ctrl-c to quit):
Running apply in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will cancel the remote apply if it's still pending. If the apply started it
will stop streaming the logs, but will not stop the apply running remotely.
Preparing the remote apply...
To view this run in a browser, visit:
https://app.terraform.io/app/example-org-849606/getting-started/runs/run-N6SWWJ5ctQ1GUb9u
Waiting for the plan to start...
Terraform v1.4.6
on linux_amd64
Initializing plugins and modules...
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
+ create
Terraform will perform the following actions:
# fakewebservices_database.prod_db will be created
+ resource "fakewebservices_database" "prod_db" {
+ id = (known after apply)
+ name = "Production DB"
+ size = 256
}
# fakewebservices_load_balancer.primary_lb will be created
+ resource "fakewebservices_load_balancer" "primary_lb" {
+ id = (known after apply)
+ name = "Primary Load Balancer"
+ servers = [
+ "Server 1",
+ "Server 2",
]
}
# fakewebservices_server.servers[0] will be created
+ resource "fakewebservices_server" "servers" {
+ id = (known after apply)
+ name = "Server 1"
+ type = "t2.micro"
+ vpc = "Primary VPC"
}
# fakewebservices_server.servers[1] will be created
+ resource "fakewebservices_server" "servers" {
+ id = (known after apply)
+ name = "Server 2"
+ type = "t2.micro"
+ vpc = "Primary VPC"
}
# fakewebservices_vpc.primary_vpc will be created
+ resource "fakewebservices_vpc" "primary_vpc" {
+ cidr_block = "0.0.0.0/1"
+ id = (known after apply)
+ name = "Primary VPC"
}
Plan: 5 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Cost Estimation:
Resources: 0 of 5 estimated
$0.0/mo +$0.0
------------------------------------------------------------------------
fakewebservices_vpc.primary_vpc: Creating...
fakewebservices_database.prod_db: Creating...
fakewebservices_database.prod_db: Creation complete after 0s [id=fakedb-yDLxEPj4gCTKBgNZ]
fakewebservices_vpc.primary_vpc: Creation complete after 0s [id=fakevpc-R4WA32MFS65XocmL]
fakewebservices_server.servers[0]: Creating...
fakewebservices_server.servers[1]: Creating...
fakewebservices_server.servers[1]: Creation complete after 0s [id=fakeserver-XXQWunsdKFkz6m7d]
fakewebservices_server.servers[0]: Creation complete after 0s [id=fakeserver-GKFHCxRYNKmHnG6t]
fakewebservices_load_balancer.primary_lb: Creating...
fakewebservices_load_balancer.primary_lb: Creation complete after 1s [id=fakelb-ohma1mSbABs3dWnV]
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
...
========================================================================
You did it! You just provisioned infrastructure with Terraform Cloud!
The organization we created here has a 30-day free trial of the Team &
Governance tier features. After the trial ends, you'll be moved to the Free tier.
You now have:
* Workspaces for organizing your infrastructure. Terraform Cloud manages
infrastructure collections with workspaces instead of directories. You
can view your workspace here:
https://app.terraform.io/app/example-org-849606/workspaces/getting-started
* Remote state management, with the ability to share outputs across
workspaces. We've set up state management for you in your current
workspace, and you can reference state from other workspaces using
the 'terraform_remote_state' data source.
* Much more!
To see the mock infrastructure you just provisioned and continue exploring
Terraform Cloud, visit:
https://app.terraform.io/fake-web-services
_direnv_hook:2: no such file or directory: /home/linuxbrew/.linuxbrew/Cellar/direnv/2.32.2/bin/direnv
[23-05-21 23:09:50] teraoka@inspiron
~/ghq/github.com/hashicorp/tfc-getting-started
$ +[main]
Well-Known Labels, Annotations and Taints
Name | Description |
---|---|
kubernetes.io/ingress.allow-http | "true" or "false" |
kubernetes.io/ingress.class (deprecated) | gce |
kubernetes.io/ingress.global-static-ip-name | static ip name (google_compute_global_address) |
ingress.gcp.kubernetes.io/pre-shared-cert | managed certificate name |
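The annotations above can be combined on a single Ingress. A minimal hypothetical sketch (all resource names are illustrative, not from the original post):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # Disable the HTTP (port 80) listener; serve HTTPS only
    kubernetes.io/ingress.allow-http: "false"
    # Reserved global static IP (google_compute_global_address)
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
    # Pre-created managed certificate
    ingress.gcp.kubernetes.io/pre-shared-cert: my-managed-cert
spec:
  defaultBackend:
    service:
      name: my-service
      port:
        number: 443
```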
Summary of external Ingress annotations
Name | Description |
---|---|
cloud.google.com/app-protocols | Specifies the protocol between the LB and the backends as JSON, e.g. '{"my-https-port":"HTTPS","my-http-port":"HTTP"}' |
cloud.google.com/backend-config | Specifies the name of a BackendConfig |
cloud.google.com/neg | The value is JSON, e.g. {"ingress": true}; with multiple ports it can also be configured per port |
How to specify a NEG per port (the NEG is created if it does not already exist):
{
"exposed_ports": {
"80": {
"name": "http-neg"
},
"443": {
"name": "https-neg"
}
}
}
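That per-port JSON goes in the cloud.google.com/neg annotation on a Service. A hypothetical sketch (Service and selector names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # One standalone NEG per port, named as in the JSON above
    cloud.google.com/neg: '{"exposed_ports": {"80": {"name": "http-neg"}, "443": {"name": "https-neg"}}}'
spec:
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
    - name: https
      port: 443
```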
Pod annotations related to the Cluster Autoscaler
Name | Description |
---|---|
cluster-autoscaler.kubernetes.io/safe-to-evict | Setting this to "true" tells the Cluster Autoscaler that this Pod is safe to evict |
If a Pod mounts volumes via emptyDir or hostPath, the Cluster Autoscaler will not evict it by default, so when that is not a problem it is a good idea to set safe-to-evict to "true".
To apply this to all Pods, it can instead be configured on the Cluster Autoscaler side.
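A minimal sketch of the annotation on a Pod that uses emptyDir (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  annotations:
    # Allow the Cluster Autoscaler to evict this Pod despite the emptyDir volume
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: cache
          mountPath: /cache
  volumes:
    - name: cache
      emptyDir: {}
```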
Name | Description |
---|---|
cluster-autoscaler.kubernetes.io/scale-down-disabled | Setting this to "true" excludes it from the Cluster Autoscaler's scale-down candidates |
My personal laptop at home had run Linux for years, but a few months ago I replaced it with an Inspiron 14, put Ubuntu 22.04 on it, and found it inconvenient after all.
For some of my work I use Windows 10 with WSL2, but I never liked its terminal. Searching for alternatives, I found that WezTerm was said to be good.
After a quick trial it felt very nice, so I decided to put the home Inspiron back on its preinstalled Windows 11.
https://wezfurlong.org/wezterm/install/windows.html#installing-on-windows
winget install wez.wezterm
https://learn.microsoft.com/ja-jp/windows/package-manager/winget/
https://github.com/microsoft/winget-cl
https://learn.microsoft.com/ja-jp/windows/wsl/install
wsl --install
PS C:\Users\ytera> wsl --list --online
The following is a list of valid distributions that can be installed.
The default distribution is marked with '*'.
Install using 'wsl --install -d <Distro>'.
NAME FRIENDLY NAME
* Ubuntu Ubuntu
Debian Debian GNU/Linux
kali-linux Kali Linux Rolling
Ubuntu-18.04 Ubuntu 18.04 LTS
Ubuntu-20.04 Ubuntu 20.04 LTS
Ubuntu-22.04 Ubuntu 22.04 LTS
OracleLinux_7_9 Oracle Linux 7.9
OracleLinux_8_7 Oracle Linux 8.7
OracleLinux_9_1 Oracle Linux 9.1
openSUSE-Leap-15.5 openSUSE Leap 15.5
SUSE-Linux-Enterprise-Server-15-SP4 SUSE Linux Enterprise Server 15 SP4
SUSE-Linux-Enterprise-15-SP5 SUSE Linux Enterprise 15 SP5
openSUSE-Tumbleweed openSUSE Tumbleweed
PS C:\Users\ytera>
Installing VS Code
winget install vscode
The install location (scope?) can be specified with --scope machine or --scope user. The default is apparently user.
Remapping CapsLock to Ctrl
https://news.mynavi.jp/techplus/article/20210609-1900755/
PS C:\Users\ytera> winget install powertoys
Multiple packages matched the input criteria. Please refine the input.
Name                 ID                   Source
-----------------------------------------------
Microsoft PowerToys XP89DCGQ3K6VLD msstore
PowerToys (Preview) Microsoft.PowerToys winget
winget install --source msstore powertoys
I want to use chgkey, but it is distributed as an LZH archive, so install 7-Zip
winget install 7zip.7zip
Change the WSL2 (Ubuntu) shell to zsh (my work machine is a Mac, so keep them consistent)
sudo apt install -y zsh
chsh -s /usr/bin/zsh
Installing Linuxbrew
As documented on the site:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install awscli rtx fzf ghq hugo hadolint trivy checkov
sudo apt install -y jq
Installing the gcloud command
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-440.0.0-linux-x86_64.tar.gz
tar xvf google-cloud-cli-440.0.0-linux-x86_64.tar.gz
./google-cloud-sdk/install.sh
rtx install ecspresso 2.2.3
rtx global ecspresso 2.2.3
rtx install kubectl
rtx global kubectl 1.27.4
rtx install golang
rtx global golang 1.21.0
Installing Docker (no plans to use Windows containers, so install it inside Ubuntu rather than Docker Desktop for Windows)
https://docs.docker.com/engine/install/ubuntu/
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Add yourself to the docker group
sudo usermod -a -G docker $USER
Running Playwright
╔════════════════════════════════════════════════════════════╗
║ Looks like Playwright was just installed or updated. ║
║ Please run the following command to download new browsers: ║
║ ║
║ playwright install ║
║ ║
║ <3 Playwright Team ║
╚════════════════════════════════════════════════════════════╝
╔══════════════════════════════════════════════════════╗
║ Host system is missing dependencies to run browsers. ║
║ Please install them with the following command: ║
║ ║
║ sudo playwright install-deps ║
║ ║
║ Alternatively, use apt: ║
║ sudo apt-get install libnss3\ ║
║ libnspr4\ ║
║ libatk1.0-0\ ║
║ libatk-bridge2.0-0\ ║
║ libcups2\ ║
║ libatspi2.0-0\ ║
║ libxcomposite1\ ║
║ libxdamage1\ ║
║ libxfixes3\ ║
║ libxrandr2\ ║
║ libgbm1\ ║
║ libxkbcommon0\ ║
║ libpango-1.0-0\ ║
║ libcairo2\ ║
║ libasound2 ║
║ ║
║ <3 Playwright Team ║
╚══════════════════════════════════════════════════════╝
Japanese filenames are not displayed correctly
$ locale -a
C
C.utf8
POSIX
sudo apt install language-pack-ja
$ locale -a
C
C.utf8
POSIX
ja_JP.utf8
LANG=ja_JP.utf8 ls
With this prefix, Japanese is displayed correctly.
brew install ttyrec ttygif
brew install t-rec
https://python-poetry.org/docs/
poetry config edits the global configuration by default; adding --local scopes the setting to the current project only.
The location of the global configuration directory is documented at
https://python-poetry.org/docs/configuration/#config-directory
$XDG_CONFIG_HOME/pypoetry or ~/.config/pypoetry (Linux)
%APPDATA%\pypoetry (Windows)
~/Library/Preferences/pypoetry (macOS)
It can be overridden with the POETRY_CONFIG_DIR environment variable.
Settings are written to config.toml inside that directory.
poetry config --list shows the current settings.
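For illustration, such a config.toml might look like this (the option shown is just one example, not from the original post):

```toml
# ~/.config/pypoetry/config.toml (illustrative)
[virtualenvs]
# Create the virtualenv inside the project directory as .venv
in-project = true
```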
Getting just the values
mysql -h 127.0.0.1 -u user -D dbname -s -N -e "select name from mytable"
-s, --silent
-N, --skip-column-names
When selecting multiple columns, the output is tab-separated.
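A sketch of consuming that tab-separated output in a shell loop; the mysql output is simulated with printf so the example is self-contained:

```shell
# Consume tab-separated rows as produced by e.g.
#   mysql -h 127.0.0.1 -u user -D dbname -s -N -e "select id, name from mytable"
tab=$(printf '\t')   # a literal tab, POSIX-portable
printf '1\talice\n2\tbob\n' | while IFS="$tab" read -r id name; do
  echo "id=${id} name=${name}"
done
# prints:
#   id=1 name=alice
#   id=2 name=bob
```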
Listing the files in an installed package
-L / --listfiles
dpkg -L <package_name>
Listing the files in a package before installing (inspecting the contents of a deb file)
-c / --contents
dpkg -c <package_name>.deb
Listing the files in a package using apt-file
apt install apt-file
apt-file update
apt-file list <package_name>
Finding which package an installed file belongs to
A partial path or filename works, as does a full path
dpkg -S <pattern>
Downloading a deb file
apt-get --download-only install <package_name>
The file is saved under /var/cache/apt/archives/
These days one rarely creates a CSR by hand, but occasionally it is still necessary.
https://github.com/runfinch/finch
$ brew install --cask finch
==> Downloading https://github.com/runfinch/finch/releases/download/v0.3.0/Finch-v0.3.0-x86_64.pkg
==> Downloading from https://objects.githubusercontent.com/github-production-release-asset-2e65be/562778457/d7a83d53-f92
######################################################################## 100.0%
==> Installing Cask finch
==> Running installer for finch; your password may be necessary.
Package installers may write to any location; options such as `--appdir` are ignored.
Password:
installer: Package name is Finch
installer: Installing at base path /
installer: The install was successful.
🍺 finch was successfully installed!
$ finch vm init
INFO[0000] Using default values due to missing config file at "/Users/teraoka/.finch/finch.yaml"
INFO[0000] "/Users/teraoka/.finch" directory doesn't exist, attempting to create it
INFO[0000] binaries directory doesn't exist
INFO[0000] Requesting root access to finish network dependency configuration
Password:
INFO[0003] sudoers file not found: open /etc/sudoers.d/finch-lima: no such file or directory
INFO[0004] Initializing and starting Finch virtual machine...
33.59 MiB / 224.27 MiB (14.98%) ? p/s
67.83 MiB / 224.27 MiB (30.24%) 6.85 MiB/s
100.00 MiB / 224.27 MiB (44.59%) 6.82 MiB/s
121.59 MiB / 224.27 MiB (54.22%) 6.66 MiB/s
145.00 MiB / 224.27 MiB (64.66%) 6.53 MiB/s
167.66 MiB / 224.27 MiB (74.76%) 6.40 MiB/s
194.14 MiB / 224.27 MiB (86.57%) 6.33 MiB/s
224.00 MiB / 224.27 MiB (99.88%) 6.31 MiB/s
224.27 MiB / 224.27 MiB (100.00%) 6.39 MiB/stime="2023-02-15T10:32:47+09:00" level=info msg="Downloaded the nerdctl archive from \"https://github.com/containerd/nerdctl/releases/download/v1.1.0/nerdctl-full-1.1.0-linux-amd64.tar.gz\""
time="2023-02-15T10:32:49+09:00" level=info msg="[hostagent] Mounting disk \"finch\" on \"/mnt/lima-finch\""
time="2023-02-15T10:32:49+09:00" level=info msg="[hostagent] Starting QEMU (hint: to watch the boot progress, see \"/Applications/Finch/lima/data/finch/serial.log\")"
time="2023-02-15T10:32:49+09:00" level=info msg="SSH Local Port: 52876"
time="2023-02-15T10:32:50+09:00" level=info msg="[hostagent] Waiting for the essential requirement 1 of 5: \"ssh\""
time="2023-02-15T10:33:08+09:00" level=info msg="[hostagent] The essential requirement 1 of 5 is satisfied"
time="2023-02-15T10:33:08+09:00" level=info msg="[hostagent] Waiting for the essential requirement 2 of 5: \"user session is ready for ssh\""
time="2023-02-15T10:33:08+09:00" level=info msg="[hostagent] The essential requirement 2 of 5 is satisfied"
time="2023-02-15T10:33:08+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:33:48+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:34:28+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:35:08+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:35:48+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:36:29+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:37:09+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:37:49+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:38:29+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:39:09+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:39:49+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:40:29+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:41:09+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:41:50+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:42:30+09:00" level=info msg="[hostagent] Waiting for the essential requirement 3 of 5: \"sshfs binary to be installed\""
time="2023-02-15T10:42:47+09:00" level=fatal msg="did not receive an event with the \"running\" status"
FATA[0647] exit status 1
Something seems wrong with the networking:
[teraoka@lima-finch teraoka]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:55:55:18:72:d3 brd ff:ff:ff:ff:ff:ff
altname enp0s1
inet 192.168.5.15/24 brd 192.168.5.255 scope global dynamic noprefixroute eth0
valid_lft 85730sec preferred_lft 85730sec
inet6 fec0::f2fa:45f7:6a48:4b35/64 scope site dynamic noprefixroute
valid_lft 86363sec preferred_lft 14363sec
inet6 fe80::8044:2704:3d1a:737e/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: lima0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:55:55:c8:63:3f brd ff:ff:ff:ff:ff:ff
altname enp0s2
inet 192.168.105.2/24 brd 192.168.105.255 scope global dynamic noprefixroute lima0
valid_lft 85730sec preferred_lft 85730sec
inet6 fdc4:8c61:16fc:b74e:c424:a3a2:1a12:e097/64 scope global dynamic noprefixroute
valid_lft 2591998sec preferred_lft 604798sec
inet6 fe80::6cc5:ff98:802a:162/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[teraoka@lima-finch teraoka]$ ip r
default via 192.168.5.2 dev eth0 proto dhcp src 192.168.5.15 metric 100
default via 192.168.105.1 dev lima0 proto dhcp src 192.168.105.2 metric 101
192.168.5.0/24 dev eth0 proto kernel scope link src 192.168.5.15 metric 100
192.168.105.0/24 dev lima0 proto kernel scope link src 192.168.105.2 metric 101
# VM Config
`${HOME}/.finch/finch.yaml`
Example:
```yaml
# CPUs: the amount of vCPU to dedicate to the virtual machine. (required)
cpus: 4
# Memory: the amount of memory to dedicate to the virtual machine. (required)
memory: 4GiB
# AdditionalDirectories: the work directories that are not supported by default. In macOS, only home directory is supported by default.
# For example, if you want to mount a directory into a container, and that directory is not under your home directory,
# then you'll need to specify this field to add that directory or any ascendant of it as a work directory. (optional)
additional_directories:
  # the path of each additional directory.
  - path: /Volumes
```
$ cat ~/.finch/finch.yaml
cpus: 3
memory: 4GiB
Entering the Lima VM
LIMA_HOME=/Applications/Finch/lima/data /Applications/Finch/lima/bin/limactl shell finch
https://oauth2-proxy.github.io/oauth2-proxy/
./oauth2-proxy \
--provider=github \
--github-user=yteraoka \
--client-id=${GITHUB_OAUTH_APP_CLIENT_ID} \
--client-secret=${GITHUB_OAUTH_APP_CLIENT_SECRET} \
--cookie-secret=${COOKIE_SECRET} \
--email-domain=\* \
--http-address=0.0.0.0:8080 \
--upstream=http://127.0.0.1:8000 \
--redirect-url=http://localhost:8080/oauth2/callback \
--scope=user:email \
--cookie-secure=false
The cookie-secret is a random string that must be 16, 24, or 32 bytes long.
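One common way to generate such a secret (a sketch; openssl is assumed to be available, and oauth2-proxy also accepts base64-encoded values):

```shell
# 32 random bytes, base64-encoded (44 characters)
COOKIE_SECRET=$(openssl rand -base64 32)
echo "$COOKIE_SECRET"
```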
https://oxyno-zeta.github.io/s3-proxy/
log:
  level: info
  format: json
server:
  listenAddr: 127.0.0.1
  port: 8000
  timeouts:
    readTimeout: 5s
    readHeaderTimeout: 10s
    writeTimeout: 60s
    idleTimeout: 10s
  compress:
    enabled: false
targets:
  first-bucket:
    mount:
      path:
        - /
    actions:
      GET:
        enabled: true
        config:
          indexDocument: index.html
          disableListing: true
      PUT:
        enabled: false
      DELETE:
        enabled: false
    bucket:
      name: ${S3_BUCKET_NAME}
      prefix: ""
      region: ap-northeast-1
      disableSSL: false
How to issue a certificate using an existing private key when special circumstances require it:
Create a CSR from that key and pass it when requesting the certificate.
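A sketch with openssl (the key is generated here only to keep the example self-contained; in the real scenario existing.key already exists, and the subject is illustrative):

```shell
# Demo only: in practice the private key already exists
openssl genrsa -out existing.key 2048
# Create a CSR from the existing key
openssl req -new -key existing.key -subj "/CN=example.com" -out request.csr
# Inspect the subject the CA will see
openssl req -in request.csr -noout -subject
```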