
hpobenchexperimentutils's Issues

OSError: [Errno 39] Directory not empty: 'attribute_lock'

When running autogluon on some benchmarks, the following error occurs at the end of the optimization procedure (unfortunately before the trajectory is rewritten):

See also the complete log here:
run_NAS1SHOT1_autogluon_32_errlog.txt
run_NAS1SHOT1_autogluon_32.cmd_out.txt

@PhMueller: Can we fix this or safely try/except this error since the optimization completed?

[INFO] autogluon.core.searcher.bayesopt.tuning_algorithms.bo_algorithm at 2021-03-21 16:57:35,315 --- BO Algorithm: Selecting final set of candidates.
Exception ignored in: <function Bookkeeper.__del__ at 0x7fb3a8c30dd0>                                                                                                                                       
Traceback (most recent call last):
  File "/home/eggenspk/2020_Hpolib2/HPOBenchExperimentUtils/HPOBenchExperimentUtils/core/bookkeeper.py", line 328, in __del__    
    shutil.rmtree(self.lock_dir)
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/shutil.py", line 494, in rmtree
    _rmtree_safe_fd(fd, path, onerror)
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/shutil.py", line 436, in _rmtree_safe_fd
    onerror(os.rmdir, fullname, sys.exc_info())                                                       
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/shutil.py", line 434, in _rmtree_safe_fd
    os.rmdir(entry.name, dir_fd=topfd)                                                                
OSError: [Errno 39] Directory not empty: 'attribute_lock'                             
Exception ignored in: <function Bookkeeper.__del__ at 0x7fb3a8c30dd0>                                                                                                                                       
Traceback (most recent call last):
  File "/home/eggenspk/2020_Hpolib2/HPOBenchExperimentUtils/HPOBenchExperimentUtils/core/bookkeeper.py", line 328, in __del__
    shutil.rmtree(self.lock_dir)
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/shutil.py", line 498, in rmtree                                
    onerror(os.rmdir, path, sys.exc_info())
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/shutil.py", line 496, in rmtree                               
    os.rmdir(path) 
OSError: [Errno 39] Directory not empty: '/home/eggenspk/2020_Hpolib2/HPOBenchExperimentUtils/exp_outputs/NASBench1shot1SearchSpace1Benchmark/autogluon/run-1/lock_dir'
[ERROR] autogluon.core.scheduler.hyperband at 2021-03-21 16:57:58,288 --- Traceback (most recent call last):
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/multiprocessing/managers.py", line 811, in _callmethod                                                                              
    conn = self._tls.connection
AttributeError: 'ForkAwareLocal' object has no attribute 'connection'  

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/site-packages/autogluon/core/utils/custom_process.py", line 16, in run
    mp.Process.run(self)
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/site-packages/autogluon/core/scheduler/scheduler.py", line 157, in _worker
    ret = fn(**args)
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/site-packages/autogluon/core/decorator.py", line 60, in __call__
    output = self.f(args, **kwargs)
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/site-packages/autogluon/core/decorator.py", line 143, in wrapper_call
    return func(*args, **kwargs)
  File "/home/eggenspk/2020_Hpolib2/HPOBenchExperimentUtils/HPOBenchExperimentUtils/optimizer/autogluon_optimizer.py", line 150, in objective_function
    **self.settings_for_sending)
  File "/home/eggenspk/2020_Hpolib2/HPOBenchExperimentUtils/HPOBenchExperimentUtils/core/bookkeeper.py", line 40, in wrapped
    self.increase_total_tae_used(1)
  File "/home/eggenspk/2020_Hpolib2/HPOBenchExperimentUtils/HPOBenchExperimentUtils/core/bookkeeper.py", line 290, in increase_total_tae_used
    self.total_tae_calls_proxy.value = self.total_tae_calls_proxy.value + total_tae_used
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/multiprocessing/managers.py", line 1138, in get
    return self._callmethod('get')
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/multiprocessing/managers.py", line 815, in _callmethod
    self._connect()
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/multiprocessing/managers.py", line 802, in _connect
    conn = self._Client(self._token.address, authkey=self._authkey)
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/multiprocessing/connection.py", line 492, in Client
    c = SocketClient(address)
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/multiprocessing/connection.py", line 620, in SocketClient
    s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
NoneType: None
Traceback (most recent call last):
  File ".//HPOBenchExperimentUtils/run_benchmark.py", line 195, in <module>
    run_benchmark(**vars(args), **benchmark_params) 
  File ".//HPOBenchExperimentUtils/run_benchmark.py", line 157, in run_benchmark
    and not tae_exceeds_limit(benchmark.get_total_tae_used(), settings['tae_limit']) \
  File "/home/eggenspk/2020_Hpolib2/HPOBenchExperimentUtils/HPOBenchExperimentUtils/core/bookkeeper.py", line 251, in get_total_tae_used
    with lock:
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/site-packages/oslo_concurrency/lockutils.py", line 270, in lock
    ext_lock.acquire(delay=delay)
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/site-packages/fasteners/process_lock.py", line 156, in acquire
    self._do_open()
  File "/home/eggenspk/miniconda3CLUSTER/envs/hpobench_37/lib/python3.7/site-packages/fasteners/process_lock.py", line 128, in _do_open
    self.lockfile = open(self.path, 'a')
FileNotFoundError: [Errno 2] No such file or directory: b'/home/eggenspk/2020_Hpolib2/HPOBenchExperimentUtils/exp_outputs/NASBench1shot1SearchSpace1Benchmark/autogluon/run-1/lock_dir/attribute_lock/attribute_lock'
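Since the optimization has already completed when this fires, one option might be a best-effort cleanup in `__del__`. A minimal sketch (hypothetical helper, not the actual Bookkeeper code): `shutil.rmtree` with `ignore_errors=True` swallows exactly this race, where another process re-creates a lock file while the directory tree is being removed.

```python
import shutil
from pathlib import Path


def safe_cleanup(lock_dir):
    """Best-effort removal of the lock directory.

    ignore_errors=True suppresses OSError (e.g. [Errno 39]
    'Directory not empty') raised when another process touches
    the lock directory mid-removal, and also makes the call a
    no-op if the directory is already gone.
    """
    shutil.rmtree(lock_dir, ignore_errors=True)
```

Whether silently ignoring the error is acceptable here depends on whether a leftover `lock_dir` can confuse a later run; if so, logging a warning instead of fully suppressing it might be the safer choice.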

Some confusion about the best-known values.

# for k in keys:
# t1 = a.data[(777, "valid_acc1es")][k][199]
# t2 = a.data[(888, "valid_acc1es")][k][199]
# t3 = a.data[(999, "valid_acc1es")][k][199]
# te.append(float(100 - np.mean([t1, t2, t3])))
# v1 = a.data[(777, "test_acc1es")][k]
# v2 = a.data[(888, "test_acc1es")][k]
# v3 = a.data[(999, "test_acc1es")][k]
# ve.append(float(100 - np.mean([v1, v2, v3])))
# best_test = np.min(te)
# best_valid = np.min(ve)
# print(b, best_test, best_valid)

Why is valid_acc1es used to calculate best_test, while test_acc1es is used to calculate best_valid?

Save error and stack traces to file

Since we have now shifted to a multi-processing style setup for running the benchmarks, it makes a lot of sense to save stack traces and error logs to disk for each process launched by the bookkeeper. A step towards this has been taken in f67c00b, which caught a very important and common error during JSON serialization: in the cases that I was testing, the results generated by the benchmark could not be successfully serialized, for reasons that still need investigation.

This error could not be caught and debugged normally because dragonfly itself calls the objective from within an asynchronous sub-process and has very poor error handling. I imagine it might devolve into a recurring issue where raised errors fly under the radar because the stdout and stderr of the currently executing process don't necessarily receive outputs from all running sub-processes. Stack traces are of particular importance here. At the very least, we should decide upon some conventions on how to add new stack traces as and where the need arises.

Python's traceback module seems very promising and easy to use for our purposes.
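As a starting point for a convention, something like the following could work: each worker appends formatted tracebacks to its own per-process file. This is only a sketch; `log_exception` and the `.errlog` naming are assumptions, not existing repo code.

```python
import traceback
from pathlib import Path


def log_exception(log_dir: Path, proc_name: str) -> None:
    """Append the currently handled exception's stack trace to a
    per-process error log file (one file per worker process)."""
    log_dir.mkdir(parents=True, exist_ok=True)
    log_file = log_dir / f"{proc_name}.errlog"
    with log_file.open("a") as fh:
        fh.write(traceback.format_exc())
        fh.write("\n")
```

A worker would then wrap its objective call in `try/except Exception`, call `log_exception(...)`, and re-raise, so the trace survives even if the parent process never sees the sub-process's stderr.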

Move optimizer libraries to extra requirements

Given the nature of the repo, I believe it should be possible (and well worth the effort) to structure the code so that users only need to install the optimizers they actually use, rather than every optimizer the repo supports. This is of particular relevance now that I'm including Dragonfly code in the development branch. I believe that moving the respective imports into the respective 'run' methods should suffice.
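One common pattern for this: declare each optimizer as a setuptools extra and defer the import to call time via a small helper. The helper below (`require`) and the extra names are hypothetical, shown only to illustrate the shape of the change.

```python
import importlib

# In setup.py, each optimizer would become an optional extra, e.g.:
# extras_require={"dragonfly": ["dragonfly-opt"], "autogluon": ["autogluon"]}


def require(module_name: str, extra: str):
    """Import an optional dependency at call time, raising a
    helpful error that names the extra to install if it is missing."""
    try:
        return importlib.import_module(module_name)
    except ImportError as e:
        raise ImportError(
            f"'{module_name}' is not installed; install it via the "
            f"'{extra}' extra, e.g. pip install <package>[{extra}]"
        ) from e
```

An optimizer's `run` method would then start with `dragonfly = require("dragonfly", "dragonfly")` instead of a top-level import, so importing the optimizer module itself stays cheap and dependency-free.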

Post-timeout hook/event

Proposal: Once the process running the optimizer is terminated externally, there should be a standardized way of handling optimizer shut-down procedures, if needed.

Rationale: This is the equivalent of having an explicit object destructor and carries with it almost all of the same rationales.
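One possible shape for such a hook, assuming the process is terminated via SIGTERM: install a signal handler that runs the optimizer's shut-down procedure before exiting. The `Optimizer` class and `install_shutdown_hook` below are hypothetical placeholders, not existing repo code (and signal-based delivery only works in the main thread on POSIX).

```python
import signal


class Optimizer:
    """Stand-in for an optimizer with explicit cleanup needs."""

    def __init__(self):
        self.clean = False

    def shutdown(self):
        # Flush pending results, release file locks,
        # terminate child processes, etc.
        self.clean = True


def install_shutdown_hook(optimizer):
    """Run the optimizer's shutdown procedure when the process
    is terminated externally (SIGTERM), then exit."""

    def handler(signum, frame):
        optimizer.shutdown()
        raise SystemExit(0)

    signal.signal(signal.SIGTERM, handler)
```

Optimizers without special cleanup needs would simply not register a hook, keeping the mechanism opt-in.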

Path resolution into absolute paths

@PhMueller
I encountered this while working on the dragonfly branch: we should make sure that any Path objects we construct that are expected to stay alive for more than a few lines of code are resolved into absolute paths as soon as they are constructed. This becomes very relevant when optimizers, which are technically black boxes as far as the scope of this repo is concerned, do directory magic that may or may not be context-safe. Case in point: dragonfly uses file-based multi-process communication and is therefore run from the /tmp directory so that it doesn't cause other issues. If it is passed any relative paths from HPOlibExperimentUtils, or fails to return to the old working directory, bad things could happen.

I already did so for the output directory here: b4491c8
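The failure mode and the fix can be sketched in a few lines: a relative Path silently changes meaning after a `chdir`, while a resolved one does not. The directory names here are illustrative only.

```python
import os
from pathlib import Path

# Resolve to an absolute path immediately, while the original
# working directory is still current.
output_dir = Path("exp_outputs/run-1").resolve()

# Later, an optimizer (e.g. dragonfly) changes the working directory...
os.chdir("/tmp")

# ...but the resolved path still points at the original location,
# whereas Path("exp_outputs/run-1") would now mean /tmp/exp_outputs/run-1.
assert output_dir.is_absolute()
```

Doing the `.resolve()` at construction time, as in b4491c8, makes the Path immune to any working-directory changes the optimizer performs afterwards.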
