
Comments (11)

jdsn commented on July 18, 2024

There are even more files that are created by mkcloud in cwd.
mkcloud is not made to be run directly from 'within' a git checkout, as it supports parallel runs. So there must be some distinction between these runs. And we usually do this via directories.

The mkcloud hosts use the update_automation script to create symlinks for several tools in ~/bin. Then a new mkcloud run is started from a dedicated directory (as mkcloud is now accessible via $PATH).
For use on workstations, such a symlink (once set manually) works as well (no need for update_automation).
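
For illustration, such a manual symlink setup might look like this (the checkout path and the location of the mkcloud script inside the repo are assumptions, not taken from the thread):

# assuming the automation repo is checked out at ~/automation
# and ~/bin is in $PATH:
mkdir -p ~/bin
ln -s ~/automation/scripts/mkcloud ~/bin/mkcloud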

So in short, we use it kind of that way (that's just pseudocode to show what must be possible):

mkdir cloud.v1 cloud.v2
cd cloud.v1
# configure cloud.v1 (or source its config file)
mkcloud plain &
cd ../cloud.v2
# configure cloud.v2 (or source its config file)
mkcloud plain &

Files (that are interesting for debugging the current run) are written only to cwd (I guess some libvirt XML files go to /tmp, but they are usually not needed after a run). mkcloud itself also should not cd to an arbitrary directory on the host.

Maybe this kind of usage should be better documented.

We could change that behaviour, sure. A mkdir -p ~/mkcloud.${cloud} ; cd ~/mkcloud.${cloud} could do the trick. But we need to be careful with such a change, as we need to adapt the Jenkins setup and the mkcloud workers at the very moment of the merge.
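
A hedged sketch of what that change could look like (the $cloud variable is taken from the line above; everything else is an assumption, not an agreed design):

# create and enter a dedicated per-cloud working directory instead of
# relying on the caller's cwd; $cloud is the cloud name from the config:
workdir=~/mkcloud.${cloud}
mkdir -p "$workdir"
cd "$workdir" || exit 1
# ... all per-run files would then be written here instead of cwd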


aspiers commented on July 18, 2024

mkcloud is not made to be run directly from 'within' a git checkout

s/is/was/

I realise it started out life with Jenkins as the only use case, but it has grown way beyond that.

as it supports parallel runs. So there must be some distinction between these runs. And we usually do this via directories.

I'm curious - what are the other ways, if not via directories?

Maybe this kind of usage should be better documented.

Absolutely! There is no "maybe" here. The existing docs do now imply the use of directories, but they still need to be more explicit about it, e.g. about which files are created.

We could change that behaviour, sure. A mkdir -p ~/mkcloud.${cloud} ; cd ~/mkcloud.${cloud} could do the trick.

Yes, I think this would be a much better approach, but it should be FHS-compliant (per the Filesystem Hierarchy Standard), since mkcloud runs as root.


aspiers commented on July 18, 2024

I wonder if a lot of the mess in the way we currently have all these /root/manual.$USERNAME/ directories is a result of this issue. If mkcloud automatically stored everything to places like /var/{run,cache,log}/mkcloud, would we even need to worry about creating directory trees from a template?
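
For illustration, a minimal sketch of what such an FHS layout could look like (all directory names below are assumptions for the sake of the example):

# per-run state split across FHS locations, keyed by cloud name:
run_dir=/var/run/mkcloud/$cloud      # pid files, lock files
cache_dir=/var/cache/mkcloud/$cloud  # disk images, mirrored repos
log_dir=/var/log/mkcloud/$cloud      # run logs, supportconfigs
mkdir -p "$run_dir" "$cache_dir" "$log_dir"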


bmwiedemann commented on July 18, 2024

I guess the easiest thing would be to replace some uses of cwd with a tmpdir=/tmp/mkcloud.$$/$filename
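
A minimal sketch of this proposal (assuming $filename stands for one of the files mkcloud currently writes to cwd; $$ is the shell's PID, so parallel runs get distinct directories):

tmpdir=/tmp/mkcloud.$$
mkdir -p "$tmpdir"
# write run state below $tmpdir instead of cwd ($filename is hypothetical):
echo "example state" > "$tmpdir/$filename"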


aspiers commented on July 18, 2024

Not /tmp - /var/{run,cache,log}.


jdsn commented on July 18, 2024

I wonder if a lot of the mess in the way we currently have all these /root/manual.$USERNAME/ directories is a result of this issue.

@aspiers From my point of view this is done for two reasons:

  1. to keep one's own mkcloud config files separate
  2. to have all files that mkcloud creates (it's a lot) within a single directory that can be grepped and removed easily

Btw, the prefix "manual" was used to differentiate from the Jenkins mkcloud runs on the same host.

If mkcloud automatically stored everything to places like /var/{run,cache,log}/mkcloud, would we even need to worry about creating directory trees from a template?

And we would need a tool to do the cleanup of all old/unused files. If they are scattered across various places, this makes it quite hard without having a tool.
Even with a tool it's no longer possible to do a quick check (using 'ls' and 'du' within one directory).

So all these changes open new issues.

@bmwiedemann

I guess the easiest thing would be to replace some uses of cwd with a tmpdir=/tmp/mkcloud.$$/$filename

No, not some. If we change something here, then let's do it right and have mkcloud put absolutely nothing in cwd, and everything in a dedicated new directory.


aspiers commented on July 18, 2024

I wonder if a lot of the mess in the way we currently have all these /root/manual.$USERNAME/ directories is a result of this issue.

@aspiers From my point of view this is done for two reasons

  1. to keep one's own mkcloud config files separate

That's not too convincing, because you could instead have a single configs/ directory containing files like cloud7.aspiers, cloud7.jdsn etc. This would encourage sharing and reuse of scenarios between people.

  2. to have all files that mkcloud creates (it's a lot) within a single directory that can be removed easily

If mkcloud automatically stored everything to places like /var/{run,cache,log}/mkcloud, would we even need to worry about creating directory trees from a template?

And we would need a tool to do the cleanup of all old/unused files. If they are scattered across various places, this makes it quite hard without having a tool.

But we already need a tool to do cleanup. It's not just these files which are created, it's also a lot of state inside libvirt, LVM, backing storage, iptables rules, changes to /etc ... the list goes on.

Even with a tool it's no longer possible to do a quick check (using 'ls' and 'du' within one directory).

Why do you need ls or du? If you're worried about large files like log files or supportconfigs, then it's easier to clean up if all of those are in the same place, e.g. /var/log/mkcloud.
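
For example, a hedged cleanup sketch under that assumption (the per-cloud subdirectory layout and the 14-day threshold are invented for illustration):

# remove per-run log directories older than two weeks:
find /var/log/mkcloud -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +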

So all these changes open new issues.

That's true at least :-)

@bmwiedemann

I guess the easiest thing would be to replace some uses of cwd with a tmpdir=/tmp/mkcloud.$$/$filename

No, not some. If we change something here, then let's do it right and have mkcloud put absolutely nothing in cwd, and everything in a dedicated new directory.

... or in multiple directories. I agree there are pros and cons either way, but let's remain mindful that the FHS was designed with careful intentions, and its designers must have had good reasons for deciding it was worth having pid files and log files (for example) in different directories. So let's ask ourselves: what are those reasons, and do they apply to mkcloud?


jdsn commented on July 18, 2024

@aspiers

That's not too convincing

I don't want to convince you. I just explained why it is this way currently.

Why do you need ls or du?

Just an example: you quickly get to know that run X produced 5MB of logs and run Y produced 6GB. That's just a 'du -sch manual.*' away. Spreading files across the filesystem tree makes this overview harder.

If you're worried about large files like log files or supportconfigs, then it's easier to clean up if all of those are in the same place, e.g. /var/log/mkcloud.

My point here was that at $somewhen it is not that all supportconfigs become old and can be cleaned up; rather, an entire cloud run becomes old and all of its related files need to be cleaned up. This is easier if all these files are collected in a single directory (wherever that might be located).
Having a single location to collect all supportconfigs does not help by itself. IMHO it makes the mess even worse (if not accounted for by tooling).


jdsn commented on July 18, 2024

But we already need a tool to do cleanup.

Well, it's mkcloud itself that cleans up the last cloud (which is still running) in the slot it is about to use for its run. But that's by design, so that people are able to connect to the cloud as long as it is still running.

It's not just these files which are created, it's also a lot of state inside libvirt, LVM, backing storage, iptables rules, changes to /etc ... the list goes on.

Sure, but many of them are not specific to a single mkcloud run, but generic changes. And they are all very small in size. I won't start cleaning up in /etc when the disk is at 100%.
That does not mean that we should keep it that way. I just care about a simple way to get an overview of, and delete, the files of individual mkcloud runs.


aspiers commented on July 18, 2024

I don't want to convince you. I just explained why it is this way currently.

OK :-)

Why do you need ls or du?

Just an example: you quickly get to know that run X produced 5MB of logs and run Y produced 6GB. That's just a 'du -sch manual.*' away. Spreading files across the filesystem tree makes this overview harder.

Does it really, though? Firstly, mkcloud does not currently log the run output automatically at all; in fact I have just submitted #1205 for that. But even if it did, if you have everything in the same directory, then you're measuring the size of not just the logs but everything, including supportconfigs. OTOH, with #1205, du /var/log/mkcloud/$cloudname is much more precise.

If you're worried about large files like log files or supportconfigs, then it's easier to clean up if all of those are in the same place, e.g. /var/log/mkcloud.

My point here was that at $somewhen it is not that all supportconfigs become old and can be cleaned up; rather, an entire cloud run becomes old and all of its related files need to be cleaned up. This is easier if all these files are collected in a single directory (wherever that might be located).

As I already pointed out, I think this is a fallacy: it misleads people into thinking they can correctly clean up an entire cloud with one simple rm -rf, when in fact they can't.

Having a single location to collect all supportconfigs does not help by itself. IMHO it makes the mess even worse (if not accounted for by tooling).

But we already need a tool to do cleanup.

Well, it's mkcloud itself that cleans up the last cloud (which is still running) in the slot it is about to use for its run. But that's by design, so that people are able to connect to the cloud as long as it is still running.

Sure, when I said "we already need a tool", I was also trying to imply "we already have a tool", and it's called mkcloud. Just as the right way to build an mkcloud cloud is with mkcloud, the right way to delete one is also using mkcloud.
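
For example (the exact invocation is an assumption based on this comment, not a verified mkcloud command line):

# tear down the cloud belonging to this working directory;
# 'cleanup' as a step name is assumed from the discussion above:
cd ~/mkcloud.${cloud} && mkcloud cleanup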

It's not just these files which are created, it's also a lot of state inside libvirt, LVM, backing storage, iptables rules, changes to /etc ... the list goes on.

Sure, but many of them are not specific to a single mkcloud run, but generic changes. And they are all very small in size. I won't start cleaning up in /etc when the disk is at 100%.
That does not mean that we should keep it that way. I just care about a simple way to get an overview of, and delete, the files of individual mkcloud runs.

What do you mean by "files of individual mkcloud runs", exactly? I agree there should be an easy way to get this overview and delete files, but I don't think keeping everything in one directory requires this, or even particularly assists it.


jdsn commented on July 18, 2024

But even if it did, if you have everything in the same directory, then you're measuring the size of not just the logs but everything, including supportconfigs.

Exactly. And it is exactly what I want :) Obviously not what you want, though.

OTOH, with #1205, du /var/log/mkcloud/$cloudname is much more precise

Getting the size of only the logs does not help us maintain the workers. When cleaning up, it does not make sense to delete only parts of a run (like only the logs, or only the supportconfigs); if we clean up, we delete everything from that run, and that operation must be easy.
If that is solved differently - fine.
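
To make the contrast concrete: deleting one run under a split FHS layout would need something like this hedged sketch (directory names are the same assumptions as in the earlier sketch), whereas with a single per-run directory it is one rm -rf of that directory:

# remove every file belonging to one run when its files are spread
# across several FHS locations:
cloud=mycloud
rm -rf "/var/run/mkcloud/$cloud" "/var/cache/mkcloud/$cloud" "/var/log/mkcloud/$cloud"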

it misleads people into thinking they can correctly clean up an entire cloud with one simple rm -rf

I think, if someone working with OpenStack assumes that an rm -rf will shut down VMs on their machine, then we have a totally different problem to solve :)

What do you mean by "files of individual mkcloud runs", exactly?

logs (the ones I create myself by adding | tee mytodayslog), supportconfigs, mkcloud.config, custom overlay files (mkcloud | qa_crowbarsetup | install-chef-suse), the crowbar backup tarball, tempest logs, QA test logs ... maybe there are even more.

I agree there should be an easy way to get this overview and delete files, but I don't think keeping everything in one directory requires this,

sure

or even particularly assists it.

LOL

