
forward

What is this?

Forward sets up an sbatch script on your cluster resource and port forwards it back to your local machine! Useful for jupyter notebook and tensorboard, amongst other things.

  • start.sh is intended for submitting a job and setting up ssh forwarding
  • start-node.sh will submit the job and give you a command to ssh to the node, without port forwarding

The folder sbatches contains scripts, organized by cluster resource, that are intended for use and submission. It's up to you to decide if you want a port forwarded (e.g., for a jupyter notebook) or just an instruction for how to connect to a running node with your application.

Tiny Tutorials

Here are some "tiny tutorials" to help you get started with the software. They are tiny because there are many possible use cases!

Setup

For interested users, a few tutorials are provided on the Research Computing Lessons site. Brief instructions are also documented in this README.

For farmshare - please navigate to the README located in sbatches/farmshare/README.md.

Clone the Repository

Clone this repository to your local machine.

You will then need to create a parameter file. To do so, follow the prompts at:

bash setup.sh

You can always edit params.sh later to change these configuration options.

Parameters

  • RESOURCE should refer to an identifier for your cluster resource that will be recorded in your ssh configuration, and then referenced in the scripts to interact with the resource (e.g., ssh sherlock).
  • PARTITION: if you intend to use a GPU (e.g., sbatches/py2-tensorflow.sbatch), the PARTITION variable should be set to "gpu."
  • CONTAINERSHARE (optional) is a location on your cluster resource (typically world readable) where you might find containers (named by a hash of the container name in the library) that are ready to go! If you are at Stanford, leave this as the default. If you aren't, ask your cluster admin about setting up a containershare.
  • CONNECTION_WAIT_SECONDS refers to how many seconds the start.sh script waits before setting up port forwarding. If your cluster is slow or particularly busy, try setting this to 30.
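For reference, params.sh is a plain shell file of variable assignments. A minimal sketch (the values below are illustrative assumptions, not the actual defaults written by setup.sh) might look like:

```shell
# params.sh -- illustrative values only; setup.sh writes the real file
RESOURCE="sherlock"              # ssh host alias from your ~/.ssh/config
PARTITION="normal"               # slurm partition to submit jobs to
CONTAINERSHARE="/scratch/users/vsochat/share"   # optional; ask your admin
CONNECTION_WAIT_SECONDS="5"      # pause before setting up port forwarding
```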

If you want a different GPU setup (other than --partition gpu --gres gpu:1), set this entire string as the PARTITION variable.
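For example, a hypothetical two-GPU request could replace the default string wholesale (the flags are standard slurm options; the exact values depend on your cluster):

```shell
# In params.sh: the whole allocation string goes into PARTITION
PARTITION="--partition gpu --gres gpu:2"
```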

SSH config

You will also need to at the minimum configure your ssh to recognize your cluster (e.g., sherlock) as a valid host. We have provided a hosts folder for helper scripts that will generate recommended ssh configuration snippets to put in your ~/.ssh/config file. Based on the name of the folder, you can intuit that the configuration depends on the cluster host. Here is how you can generate this configuration for Sherlock:

bash hosts/sherlock_ssh.sh
Host sherlock
    User put_your_username_here
    Hostname sh-ln01.stanford.edu
    GSSAPIDelegateCredentials yes
    GSSAPIAuthentication yes
    ControlMaster auto
    ControlPersist yes
    ControlPath ~/.ssh/%l%r@%h:%p

Using these options can reduce the number of times you need to authenticate. If you don't already have a file at ~/.ssh/config, you can generate it programmatically:

bash hosts/sherlock_ssh.sh >> ~/.ssh/config

Do not run this command if there is content in the file that you might overwrite! One downside is that you forgo sherlock's load balancing, since you will connect to the same login node each time.

SSH Port Forwarding Considerations

Depending on your cluster, you will need to identify whether the compute nodes (not the login nodes) are isolated from the outside world, i.e., whether they can be ssh'd into directly. On Sherlock they are isolated; on FarmShare they are not. This matters when we set up the ssh command to port forward from the local machine to the compute node.

For HPCs where the compute node is isolated from the outside world (as on Sherlock), the ssh command establishes a tunnel to the login node, and then from the login node establishes a second tunnel to the compute node, which is only accessible from the login node. The entire command might look like this:

$ ssh -L $PORT:localhost:$PORT ${RESOURCE} ssh -L $PORT:localhost:$PORT -N "$MACHINE" &

In the command above, the first half, ssh -L $PORT:localhost:$PORT ${RESOURCE}, is executed on the local machine and forwards the port to the login node. The second half, ssh -L $PORT:localhost:$PORT -N "$MACHINE" &, is run from the login node and forwards the port on to the compute node, since the compute node is only reachable from the login nodes.

For HPCs where the compute node is not isolated from the outside world (as on FarmShare), the ssh command first establishes a connection to the login node, then passes the login credentials on to the compute node to establish a tunnel between localhost and the port on the compute node. The ssh command in this case uses the -K flag, which forwards the login credentials to the compute node:

$ ssh "$DOMAINNAME" -l $FORWARD_USERNAME -K -L $PORT:$MACHINE:$PORT -N &

The drawback of this method is that when the start.sh script runs, you have to authenticate twice: once at the beginning to check whether a job is already running on the HPC, and again when the port forwarding is set up. This is the case for FarmShare.

In the setup.sh file, we have added an option, $ISOLATECOMPUTENODE, which is a boolean. For FarmShare and Sherlock this value is set automatically. For your own default cluster, you will be prompted for whether the compute node is isolated; answer true or false (case sensitive) depending on your resource's properties. You may have to consult the documentation or ask the HPC manager.
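Putting the two cases together, the choice of forwarding command can be sketched as follows. This is a simplified illustration, not the repository's actual code, and the port, node, FarmShare hostname, and username values are hypothetical:

```shell
# Values a real run would fill in from params.sh and the allocated job
ISOLATECOMPUTENODE="true"        # true for Sherlock, false for FarmShare
PORT=56143
RESOURCE="sherlock"
MACHINE="sh-01-01"
DOMAINNAME="rice.stanford.edu"   # hypothetical FarmShare login host
FORWARD_USERNAME="jdoe"          # hypothetical cluster username

if [ "$ISOLATECOMPUTENODE" = "true" ]; then
    # Tunnel to the login node, then a second tunnel on to the compute node
    CMD="ssh -L $PORT:localhost:$PORT $RESOURCE ssh -L $PORT:localhost:$PORT -N $MACHINE"
else
    # Direct tunnel to the compute node, delegating credentials with -K
    CMD="ssh $DOMAINNAME -l $FORWARD_USERNAME -K -L $PORT:$MACHINE:$PORT -N"
fi
echo "$CMD"
```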

Notebooks

Notebooks have associated sbatch scripts that are intended to start a jupyter (or similar) notebook, and then forward the port back to your machine. If you just want to submit a job, (without port forwarding) see the job submission section. For notebook job submission, you will want to use the start.sh script.

Notebook password

If you have not set up notebook authentication before, you will need to set a password via jupyter notebook password on your cluster resource.
Make sure to pick a secure password!

Job Submission

Job submission can mean executing a command to a container, running a container, or writing your own sbatch script (and submitting from your local machine). For standard job submission, you will want to use the start-node.sh script. If your cluster has a containershare, you can use the containershare-notebook set of scripts to have a faster deployment (without needing to pull).

Usage

# Choose a containershare notebook, and launch it! On Sherlock, the containers are already in the share
bash start.sh sherlock/containershare-notebook docker://vanessa/repo2docker-julia

# Run a Singularity container that already exists on your resource (recommended)
bash start-node.sh singularity-run /scratch/users/vsochat/share/pytorch-dev.simg

# Execute a custom command to the same Singularity container
bash start-node.sh singularity-exec /scratch/users/vsochat/share/pytorch-dev.simg echo "Hello World"

# Run a Singularity container from a url, `docker://ubuntu`
bash start-node.sh singularity-run docker://ubuntu

# Execute a custom command to the same container
bash start-node.sh singularity-exec docker://ubuntu echo "Hello World"

# Execute your own custom sbatch script
cp myscript.job sbatches/
bash start-node.sh myscript
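A custom script is an ordinary sbatch file dropped into sbatches/. A minimal, hypothetical myscript.job (the directives and values are illustrative, not requirements) could look like:

```shell
#!/bin/bash
#SBATCH --job-name=myscript
#SBATCH --time=01:00:00
#SBATCH --mem=4G

# Any long-running work goes here; start-node.sh reports which node it landed on
echo "hello from myscript"
```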

As a service for Stanford users, @vsoch provides a containershare of ready-to-go containers to use on Sherlock! The majority of these deploy interactive notebooks, but they can also be run without one (use start-node.sh instead of start.sh). If you want to build your own container for containershare (or request a container), see the README in the repository that serves it.

# Run a containershare container with a notebook
bash start.sh sherlock/containershare-notebook docker://vanessa/repo2docker-julia

If you would like to request a custom notebook, please reach out.

Usage

# To start a jupyter notebook in a specific directory ON the cluster resource
bash start.sh jupyter <cluster-dir>

# If you don't specify a path on the cluster, it defaults to your ${SCRATCH}
bash start.sh jupyter /scratch/users/<username>

# To start a jupyter notebook with tensorflow in a specific directory
bash start.sh py2-tensorflow <cluster-dir>

# If you want a GPU node, make sure your partition is set to "gpu."
# To start a jupyter notebook (via a Singularity container!) in a specific directory
bash start.sh singularity-jupyter <cluster-dir>

Want to create your own Singularity jupyter container? Use repo2docker and then specify the container URI at the end.

bash start.sh singularity-jupyter <cluster-dir> <container>

# You can also run a general singularity container!
bash start.sh singularity <cluster-dir> <container>

# To start tensorboard in a specific directory (be careful: not recommended, as it is not password protected)
bash start.sh tensorboard <cluster-dir>

# To stop the running jupyter notebook server
bash end.sh jupyter

If the sbatch job is still running, but your port forwarding stopped (e.g. if your computer went to sleep), you can resume with:

bash resume.sh jupyter

Debugging

In addition to some good debugging notes here, common errors are covered below.

Connection refused after start.sh finished

Sometimes you get connection refused messages after the script has started up. Wait up to a minute, then refresh the opened web page; this should fix the issue.

Terminal Hangs after start.sh

Sometimes after a change in your network you need to reauthenticate, much as you might hit a login issue here. Opening a new shell usually resolves the hangup.

Terminal Hangs on "== Checking for previous notebook =="

This is the same bug as above. This command captures output into a variable, so if it hangs longer than 5-10 seconds, it has likely hit the password prompt and will hang indefinitely. Issuing a standard command that re-prompts for your password in the terminal session should fix the issue:

$ ssh sherlock pwd

slurm_load_jobs error: Socket timed out on send/recv operation

This error is essentially saying "slurm is busy, try again later." It's not an issue with submitting the job, but with the ping to slurm that performs the check. If the next ping succeeds, you should be OK. If the script terminates, you can't control how busy slurm is, but you can control how likely your job is to be allocated a node, and the frequency of the checks. Either of the following can mitigate the issue:

choose a partition that is more readily available

In your params.sh file, choose a partition that is likely to be allocated sooner, thus reducing the queries to slurm, and the chance of the error.

offset the checks by changing the timeout between attempts

The script looks for an exported variable, TIMEOUT, and sets it to 1 (one second) if not defined. To change the timeout between attempts, export this variable:

export TIMEOUT=3
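The wait loop follows an exponential backoff pattern; a simplified sketch (not the repository's exact code) looks like this:

```shell
TIMEOUT="${TIMEOUT:-1}"   # seconds before the first retry; export to change
attempt=0
delay="$TIMEOUT"
while [ "$attempt" -lt 5 ]; do
    echo "Attempt $attempt: not ready yet... retrying in $delay.."
    # sleep "$delay"      # commented out so the sketch runs instantly
    delay=$((delay * 2))  # double the wait after each failed check
    attempt=$((attempt + 1))
done
```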

While the forward tool cannot control how busy slurm is, these two strategies should help.

mux_client_forward: forwarding request failed: Port forwarding failed

Similarly, if your cluster is slow, you may get this error after "== Setting up port forwarding ==". To fix this, increase your CONNECTION_WAIT_SECONDS.

I ended a script, but can't start

Just as killing a job on Sherlock involves some delay before the node comes down, the same can be true here! Wait 20-30 seconds to give the node time to exit, and try again.

How do I contribute?

First, please read the contributing docs. Generally, you will want to:

  • fork the repository to your username
  • clone your fork
  • checkout a new branch for your feature, commit and push
  • add your name to the CONTRIBUTORS.md
  • issue a pull request!

Adding new sbatch scripts

You can add more sbatch scripts by putting them in the sbatches directory.

Contributors

akkornel, eigenstate, mckenziephagen, neutrinonerd3333, raphtown, salcc, sohams-mass, vsoch, zqfang


forward's Issues

slow sherlock response leads to port forwarding failure

In the past day or so, a couple of colleagues and I have been having a difficult time connecting to Sherlock using forward. When the script gets to setup_port_forwarding, it fails with this error: mux_client_forward: forwarding request failed: Port forwarding failed muxclient: master forward request failed, or this error: Access denied by pam_slurm_adopt: you have no active jobs on this node Authentication failed.

I believe this is because Sherlock seems to be connecting slowly, meaning that the port isn't ready when the script gets to that line. Adding sleep 30 to the start.sh script right before setup_port_forwarding seems to have fixed it.

Should this be a part of the main script, or could it be added to the Debugging part of the read me? It's not always necessary, just when Sherlock is "acting up", and it does add time to the setup process, which isn't ideal for when Sherlock isn't lagging.

resume.sh issue with ssh usage

After attempting to resume a session using

$ bash resume.sh sherlock/py3-jupyter

I am prompted for my password and after two-step verification this appears:

$ ssh [-1246AaCfGgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
           [-D [bind_address:]port] [-E log_file] [-e escape_char]
           [-F configfile] [-I pkcs11] [-i identity_file]
           [-J [user@]host[:port]] [-L address] [-l login_name] [-m mac_spec]
           [-O ctl_cmd] [-o option] [-p port] [-Q query_option] [-R address]
           [-S ctl_path] [-W host:port] [-w local_tun[:remote_tun]]
           [user@]hostname [command]

then prompts me for an input.

Add input argument to control custom python envs

First off, this is great and much better than what I had made for myself.

Second, something that I thought was nice was the option to specify a conda environment to activate before running the notebook. I've implemented something basic, but I don't know it is something that is good for general use.

Allow for hosts other than sherlock

The proposed addition of the hosts directory in #8 is making me think we could pretty straightforwardly add the ability to use hosts other than sherlock. This would make this utility applicable to any cluster using the SLURM scheduler.

default partition

Feature request: I'm a Sherlock nube but was immediately drawn to this repo because I use Jupyter a lot. I got Jupyter working on Sherlock by using these scripts, which is awesome. (Thank you!) The only real hiccup was that it took me a long time to find out that I should set my default partition to normal, instead of rdror or drorlab or whatever the default value is. Would there be a way to change the default, or at least print out that normal is an option for users whose PIs don't buy a dedicated partition?

(A more correct view of my issue may be that the problem is with Sherlock documentation instead of this repo but I figured I would start here first.)

Running notebook w/ python 3

Hello! I wrote a notebook on my local in python 3 and want to run it on Sherlock. Once running the notebook in Sherlock, I don't see a way to change the kernel (I tried ml python=3.6.1, but it did not work). What do you suggest? Thank you in advance!

open source license

I would like to suggest adding an open source license (MIT or BSD3, or something in the GPL family) before we open this up to contributions from others (and myself!) Here are a list of options --> https://opensource.org/licenses. If you let me know which is your preference, I'd be glad to add in a PR.

julia+jupyter notebook

We have request / need from a user for a Jupyter notebook running Julia. The setup (if done natively) is pretty annoying so I'm going to give this a shot with a container. This might also be good opportunity to test adding another resource (farmshare) although I'm not certain yet. Feel free to assign me to this.

Contributing guidelines

In spirit of the license, we should add a CONTRIBUTING.md. I can take a first shot at this and show you a suggestion.

ControlPath can be too long

I was using forward to run Jupyter notebooks on Sherlock, using the default ssh config in hosts/sherlock_ssh.sh, and recently encountered an SSH error: ControlPath too long.

I discovered that it was because the default ssh config includes the full hostname in the ControlPath for the Sherlock connection. I was at SLAC, and the hostname my machine was assigned was surprisingly long. I fixed this by replacing the ControlPath name %l%r@%h:%p with %C, which gives a hash of %l%h%p%r (see ssh_config manpage).

(So… pr maybe?)

Add a Changelog

To the extent that we can (somewhat) keep track of changes (and it seems overkill to have tags/releases, but could eventually be done) we should minimally have a list of changes, and the associated Github commits when they are added can (again, somewhat) trace to a version.

argument parsing for sbatch scripts

The scripts are getting interesting enough that the user should be able to provide actual arguments so we can be more specific. E.g., instead of:

bash start.sh <sbatch> <container> <directory>

they should be able to do:

bash start.sh <sbatch> --image=<container> --notebook-dir=<directory>

That way, ordering doesn't matter either, and we can have more optional arguments.

Recent start.sh issues

As of lately, when I run start.sh it starts up as normal then gives me the following output:

== Waiting for job to start, using exponential backoff ==
Attempt 0: not ready yet... retrying in 1..
Attempt 1: not ready yet... retrying in 2..
Attempt 2: resources allocated to sh-28-06!..
sh-28-06
sh-28-06
notebook running on sh-28-06

== Setting up port forwarding ==
ssh -L 56432:localhost:56432 sherlock ssh -L 56432:localhost:56432 -N sh-28-06 &
mux_client_forward: forwarding request failed: Port forwarding failed
muxclient: master forward request failed
[email protected]'s password: == Connecting to notebook ==
# automatically continues which doesn't allow me to input my password


== View logs in separate terminal ==
ssh sherlock cat /home/users/rzawadzk/forward-util/py3-jupyter.sbatch.out
ssh sherlock cat /home/users/rzawadzk/forward-util/py3-jupyter.sbatch.err

== Instructions ==
1. Password, output, and error printed to this terminal? Look at logs (see instruction above)
2. Browser: http://sh-02-21.int:56432/ -> http://localhost:56432/...
3. To end session: bash end.sh sherlock/py3-jupyter

DN525eol:forward royzawadzki$ Permission denied, please try again.
[email protected]'s password: 
# automatically continues which doesn't allow me to input my password
Permission denied, please try again.
[email protected]'s password: 
# automatically continues which doesn't allow me to input my password
[email protected]: Permission denied (gssapi-with-mic,password).

At the moment, when I get to this message and I try refreshing the link in the browser, the link is dead. If I run resume.sh however, the webpage becomes active.

remove port specification

I don't see any logic for asking the user to pre-generate a custom port, and then giving an error message if/when another notebook (by the same user) is opened. It would make more sense to generate a port on the fly, and risk that another user is using it.

I'll add this tweak to my current PR.

add check for previous notebooks

It can get confusing if the user has already submit a notebook (or other) with the same job - I'm going to add a check that advises the user to end or resume, given one is already running.

[sbatch] tensorflow

I'm going to give a go at trying this out for tensorflow with gpu - I think this would be very useful for like, everyone, lol.

resource for sherlock!

hey @raphtown ! I saw your email on the list, and wanted to say how great this is. This is a really cool resource, and I'd like to propose that I can test it out (notice I'm part of Research Computing at Stanford!) and then write up a little instructional on our lessons page. I'm thinking I might be able to containerize some of the steps to make it easier, but I haven't given it a try yet. If we are able to get everything working, I can PR to this repo with updates, and then do a little writeup (and I'd ask for your contribution if you are willing!) Let me know your thoughts - looking forward to trying it out.

Running a notebook: "ssh: Could not resolve hostname"

I have followed all the steps detailed here. When I get to the step where you start a notebook using bash start.sh jupyter /path/to/dir it doesn't connect, giving me a notice that it "could not resolve hostname." Here's what I'm running and the output (for context my SUNetID is rzawadzk):

DN52ekoj:forward royzawadzki$ bash start.sh sherlock/py3-jupyter ../BPA
== Finding Script ==
Looking for sbatches/rzawadzk/sherlock/py3-jupyter.sbatch
Looking for sbatches/sherlock/py3-jupyter.sbatch
Script      sbatches/sherlock/py3-jupyter.sbatch

== Checking for previous notebook ==
ssh: Could not resolve hostname rzawadzk: nodename nor servname provided, or not known
No existing sherlock/py3-jupyter jobs found, continuing...

== Getting destination directory ==
ssh: Could not resolve hostname rzawadzk: nodename nor servname provided, or not known
ssh: Could not resolve hostname rzawadzk: nodename nor servname provided, or not known

== Uploading sbatch script ==
ssh: Could not resolve hostname rzawadzk: nodename nor servname provided, or not known
lost connection

== Submitting sbatch ==
rzawadzk sbatch --job-name=sherlock/py3-jupyter --partition=normal --output=/forward-util/py3-jupyter.sbatch.out --error=/forward-util/py3-jupyter.sbatch.err --mem=20G --time=8:00:00 /forward-util/py3-jupyter.sbatch 59258 "../BPA"
ssh: Could not resolve hostname rzawadzk: nodename nor servname provided, or not known

== View logs in separate terminal ==
ssh rzawadzk cat /forward-util/py3-jupyter.sbatch.out
ssh rzawadzk cat /forward-util/py3-jupyter.sbatch.err

== Waiting for job to start, using exponential backoff ==
ssh: Could not resolve hostname rzawadzk: nodename nor servname provided, or not known
Attempt 0: not ready yet... retrying in 1..
ssh: Could not resolve hostname rzawadzk: nodename nor servname provided, or not known
Attempt 1: not ready yet... retrying in 2..
ssh: Could not resolve hostname rzawadzk: nodename nor servname provided, or not known
Attempt 2: not ready yet... retrying in 4..

And so on with the attempts. I'm not sure what's going on here.

USERNAME variable not changing

Hi, when attempting to log in to a Sherlock node, it seems to be using my local USERNAME instead of the one initialized in the params.sh file. I'm on Mac OS Catalina 10.15. I tried editing the start.sh file to hard-code my Sherlock username, but it still references my local USERNAME.

Access denied by pam_slurm_adopt: you have no active jobs on this node

Hi Vanessa,
Thank you for creating this tool on sherlock. I was following the instructions at https://vsoch.github.io/lessons/sherlock-jupyter/ and I run into the following issue. I'm hoping you can help me understand the problem. When I run start.sh I get the following errors, and when I try to open the notebook in my browser (using the following address) it fails. But the job is running.

[tdaley@sh-ln08 login /scratch/PI/whwong/tdaley/programs/forward]$ bash start.sh py3-jupyter /scratch/PI/whwong/tdaley/sgRNA/CRISPRa-sgRNA-determinants/deepLearningMixtureRegression/
== Finding Script ==
Looking for sbatches/sherlock/py3-jupyter.sbatch
Script sbatches/sherlock/py3-jupyter.sbatch

== Checking for previous notebook ==
No existing py3-jupyter jobs found, continuing...

== Getting destination directory ==

== Uploading sbatch script ==
py3-jupyter.sbatch 100% 146 29.6KB/s 00:00

== Submitting sbatch ==
sbatch --job-name=py3-jupyter --partition=whwong --output=/home/users/tdaley/forward-util/py3-jupyter.sbatch.out --error=/home/users/tdaley/forward-util/py3-jupyter.sbatch.err --mem=20G --time=8:00:00 /home/users/tdaley/forward-util/py3-jupyter.sbatch 58668 "/scratch/PI/whwong/tdaley/sgRNA/CRISPRa-sgRNA-determinants/deepLearningMixtureRegression/"
Submitted batch job 34562816

== View logs in separate terminal ==
ssh sherlock cat /home/users/tdaley/forward-util/py3-jupyter.sbatch.out
ssh sherlock cat /home/users/tdaley/forward-util/py3-jupyter.sbatch.err

== Waiting for job to start, using exponential backoff ==
Attempt 0: not ready yet... retrying in 1..
Attempt 1: not ready yet... retrying in 2..
Attempt 2: not ready yet... retrying in 4..
Attempt 3: not ready yet... retrying in 8..
Attempt 4: not ready yet... retrying in 16..
Attempt 5: not ready yet... retrying in 32..
Attempt 6: resources allocated to sh-08-13!..
sh-08-13
sh-08-13
notebook running on sh-08-13

== Setting up port forwarding ==
ssh -L 58668:localhost:58668 sherlock ssh -L 58668:localhost:58668 -N sh-08-13 &
Access denied by pam_slurm_adopt: you have no active jobs on this node
Authentication failed.
== Connecting to notebook ==
[I 18:10:27.968 NotebookApp] Writing notebook server cookie secret to /tmp/jupyter/notebook_cookie_secret
[I 18:10:29.512 NotebookApp] Serving notebooks from local directory: /scratch/groups/whwong/tdaley/sgRNA/CRISPRa-sgRNA-determinants/deepLearningMixtureRegression
[I 18:10:29.512 NotebookApp] 0 active kernels
[I 18:10:29.512 NotebookApp] The Jupyter Notebook is running at: http://localhost:58667/
[I 18:10:29.512 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
slurmstepd: error: *** JOB 34562525 ON sh-08-13 CANCELLED AT 2018-12-22T18:14:40 ***

== View logs in separate terminal ==
ssh sherlock cat /home/users/tdaley/forward-util/py3-jupyter.sbatch.out
ssh sherlock cat /home/users/tdaley/forward-util/py3-jupyter.sbatch.err

== Instructions ==

  1. Password, output, and error printed to this terminal? Look at logs (see instruction above)
  2. Browser: http://sh-02-21.int:58668/ -> http://localhost:58668/...
  3. To end session: bash end.sh py3-jupyter

[tdaley@sh-ln08 login /scratch/PI/whwong/tdaley/programs/forward]$ jobs
34562816 whwong py3-jupy tdaley R 2:51 1 sh-08-13

Thank you for your help and I apologize if I missed something super obvious.

Trouble with Command Line Arguments to Launch Jupyter Notebooks into the Proper Directory

From here, the way to launch jupyter notebooks is, on your local computer, to do something like bash start.sh <software> <path>. I've been attempting to play around with the argument, but all my permutations lead me to a page with only one folder, forward-util, with three files: py3-jupyter.sbatch, py3-jupyter.sbatch.err, and py3-jupyter.sbatch.out.

My situation is that I have a directory on my local computer with two subdirectories: the cloned directory (forward) and another directory with my .ipynb files called BPA. I cd into forward to start up sherlock with the following commands and outcomes:

  • bash start.sh sherlock/py3-jupyter ../BPA with the message directory not found and a page with the forward-util folder

  • the same thing above also happens when I put the absolute path

  • After moving the BPA directory into the forward directory and running bash start.sh sherlock/py3-jupyter BPA no error about the directory not being found this time, but still launches into the `

What is the proper syntax to get the files I want onto the jupyter notebooks page? Is it that the path is the path on the actual server?

Trouble logging in to launch jupyter notebook

Hi,

I'm attempting to run a jupyter notebook through sherlock on my local machine (Windows 10), following the instructions here:
https://vsoch.github.io/lessons/sherlock-jupyter/

I believe that I have successfully cloned the forward repo, generated the parameters file, and created the ssh credentials (all on my local computer). I then ran into a couple issues:

1. Issue creating password for jupyter notebook
I tried to create a password for the notebook using the following code from the $HOME folder:

$ sdev
$ ml python/3.6.1
$ ml py-jupyter/1.0.0_py36
$ which jupyter /share/software/user/open/py-jupyter/1.0.0_py36/bin/jupyter
$ jupyter notebook password

which resulted in the following error message:

$ which jupyter /share/software/user/open/py-jupyter/1.0.0_py36/bin/jupyter
/share/software/user/open/py-jupyter/1.0.0_py36/bin/jupyter
/share/software/user/open/py-jupyter/1.0.0_py36/bin/jupyter
$ jupyter notebook password
Enter password:
Verify password:
Traceback (most recent call last):
  File "/share/software/user/open/py-jupyter/1.0.0_py36/bin/jupyter-notebook", line 11, in <module>
    sys.exit(main())
  File "/share/software/user/open/py-jupyter/1.0.0_py36/lib/python3.6/site-packages/jupyter_core/application.py", line 267, in launch_instance
    return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
  File "/share/software/user/open/py-jupyter/1.0.0_py36/lib/python3.6/site-packages/traitlets/config/application.py", line 658, in launch_instance
    app.start()
  File "/share/software/user/open/py-jupyter/1.0.0_py36/lib/python3.6/site-packages/notebook/notebookapp.py", line 1362, in start
    super(NotebookApp, self).start()
  File "/share/software/user/open/py-jupyter/1.0.0_py36/lib/python3.6/site-packages/jupyter_core/application.py", line 256, in start
    self.subapp.start()
  File "/share/software/user/open/py-jupyter/1.0.0_py36/lib/python3.6/site-packages/notebook/notebookapp.py", line 345, in start
    set_password(config_file=self.config_file)
  File "/share/software/user/open/py-jupyter/1.0.0_py36/lib/python3.6/site-packages/notebook/auth/security.py", line 148, in set_password
    config.NotebookApp.password = hashed_password
  File "/share/software/user/open/python/3.6.1/lib/python3.6/contextlib.py", line 89, in __exit__
    next(self.gen)
  File "/share/software/user/open/py-jupyter/1.0.0_py36/lib/python3.6/site-packages/notebook/auth/security.py", line 131, in persist_config
    with io.open(config_file, 'w', encoding='utf8') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/users/ilow/.jupyter/jupyter_notebook_config.json'

I was able to get around this issue (I think) with the following:
$ mkdir ~/.jupyter

2. Issue logging in to launch notebook
I next tried to launch a notebook, following the instructions in Part 3 of the above tutorial:
$ bash start.sh py3-jupyter /home/users/ilow
which resulted in a strange loop where I kept entering my password, 2-factor authenticating, and then it would ask for my password/2FA again:

== Finding Script ==
Looking for sbatches/sherlock/py3-jupyter.sbatch
Script      sbatches/sherlock/py3-jupyter.sbatch

== Checking for previous notebook ==
mux_client_request_session: read from master failed: Connection reset by peer
[email protected]'s password:
Duo two-factor login for ilow

Enter a passcode or select one of the following options:

 1. Duo Push to XXX-XXX-5751
 2. Phone call to XXX-XXX-5751
 3. SMS passcodes to XXX-XXX-5751

Passcode or option (1-3): 1
ControlSocket /c/Users/ilow1/.ssh/[email protected]:22 already exists, disabling multiplexing
No existing py3-jupyter jobs found, continuing...

== Getting destination directory ==
mux_client_request_session: read from master failed: Connection reset by peer
[email protected]'s password:

Any idea why this would happen or how to remedy? I'm new to Sherlock and any advice would be much appreciated! Thanks!
