
ocis's Introduction

ownCloud Infinite Scale


Introduction

ownCloud Infinite Scale (oCIS) is the new file sync & share platform that will serve as the foundation of your data management.

Make sure to download the latest released version today!

Overview

Clients

Infinite Scale allows the ownCloud clients to synchronize and share file spaces with a scalable server backend based on reva, using open and well-defined APIs like WebDAV and CS3.

Web Office Applications

Infinite Scale can integrate web office applications.

Collaborative editing is supported by the WOPI application gateway.

Authentication

Users are authenticated via OpenID Connect using either an external IdP like Keycloak or the embedded LibreGraph Connect identity provider.

Installation

With a focus on easy installation and operation, Infinite Scale is delivered as a single binary or container that allows scaling from a Raspberry Pi to a Kubernetes cluster by changing the configuration and starting multiple services as needed. The multiservice architecture allows tailoring the functionality to your needs and reusing services that may already be in place, such as Keycloak. See the details below for the various installation options.

Important Readings

Before starting to set up an instance, we highly recommend reading the Prerequisites, the Deployment section and especially the General Information page describing and explaining information that is valid for all deployment types.

Run ownCloud Infinite Scale

Use the Official Documentation

See the Quick Guide or the Binary Setup for a single-node bare-metal deployment, starting with a Raspberry Pi or a single server; the Container Setup for classic container environments like Docker; or learn how to deploy to Kubernetes.

Use the ocis Repo as Source

Use this method to run an instance with the latest code. This is only recommended for development purposes. The minimum required Go version is 1.22. Note that you need a C compiler toolchain installed as a prerequisite, because some dependencies like reva have components that require cgo libraries/tool-chains. On Debian-based systems, install it with sudo apt install build-essential. To build and run a local instance with demo users:

# get the source
git clone [email protected]:owncloud/ocis.git

# enter the ocis dir
cd ocis

# generate assets
make generate

# build the binary
make -C ocis build

# initialize a minimal oCIS configuration
./ocis/bin/ocis init

# run with demo users
IDM_CREATE_DEMO_USERS=true ./ocis/bin/ocis server

# open http://localhost:9200 in your browser to access the bundled web UI

All batteries included: no external database, no external IDP needed!

Documentation

Admin Documentation

Refer to the Admin Documentation - Introduction to Infinite Scale to get started with running oCIS in production.

Development Documentation

See the Development Documentation - Getting Started to get an overview of Requirements, the repository structure and other starting points.

Security

See the Security Aspects for a general overview of security related topics. If you find a security issue, please contact [email protected] first.

Contributing

We are very happy that oCIS does not require a Contributor License Agreement (CLA), as it is Apache 2.0 licensed; we hope this makes it easier to contribute code. If you want to get in touch, most of the developers hang out in our Matrix channel or our Rocket.Chat channel, or you can reach out on the ownCloud Central forum.

Infinite Scale is carefully internationalized so that everyone, no matter what language they speak, has a great experience. To achieve this, we rely on the help of volunteer translators. If you want to help, you can find the projects behind the following links: Transifex for ownCloud web and Transifex for ownCloud (Select the resource by filtering for ocis-).

Please always refer to our Contribution Guidelines.

End User License Agreement

Some builds of stable ownCloud Infinite Scale releases provided by ownCloud GmbH are subject to an End User License Agreement.

Copyright

Copyright (c) 2020-2023 ownCloud GmbH <https://owncloud.com>

ocis's People

Contributors

2403905, aduffeck, ainmosni, butonic, c0rby, deepdiver1975, dependabot[bot], dragonchaser, excds, fschade, grgprarup, iljan, individual-it, jammingben, jvillafanez, kiranparajuli589, kobergj, kulmann, micbar, mmattel, ownclouders, phil-davis, refs, rhafer, sagargi, saw-jan, scharfviktor, swikritit, tboerger, wkloucek


ocis's Issues

[reva] locking support

-> running `locks':
 0. init.................. pass
 1. begin................. pass
 2. options............... pass
 3. precond............... pass
 4. init_locks............ pass
 5. put................... pass
 6. lock_excl............. pass
 7. discover.............. pass
 8. refresh............... pass
 9. notowner_modify....... FAIL (DELETE of locked resource should fail)
10. notowner_lock......... FAIL (UNLOCK with bogus lock token)
11. owner_modify.......... FAIL (PROPPATCH on locked resouce on `/remote.php/webdav/litmus/lockme': 501 Not Implemented)
12. notowner_modify....... FAIL (DELETE of locked resource should fail)
13. notowner_lock......... FAIL (UNLOCK with bogus lock token)
14. copy.................. FAIL (could not COPY locked resource:
404 Not Found)
15. cond_put.............. SKIPPED
16. fail_cond_put......... SKIPPED
17. cond_put_with_not..... pass
18. cond_put_corrupt_token FAIL (conditional PUT with invalid lock-token should fail: 204 No Content)
19. complex_cond_put...... pass
20. unlock................ pass
21. fail_cond_put_unlocked FAIL (conditional PUT with invalid lock-token should fail: 204 No Content)
22. lock_shared........... FAIL (requested lockscope not satisfied!  got shared, wanted exclusive)
23. notowner_modify....... SKIPPED
24. notowner_lock......... SKIPPED
25. owner_modify.......... SKIPPED
26. double_sharedlock..... SKIPPED
27. notowner_modify....... SKIPPED
28. notowner_lock......... SKIPPED
29. unlock................ SKIPPED
30. prep_collection....... pass
31. lock_collection....... pass
32. owner_modify.......... FAIL (PROPPATCH on locked resouce on `/remote.php/webdav/litmus/lockcoll/lockme.txt': 501 Not Implemented)
33. notowner_modify....... FAIL (DELETE of locked resource should fail)
34. refresh............... pass
35. indirect_refresh...... pass
36. unlock................ pass
37. unmapped_lock......... WARNING: LOCK on unmapped url returned 200 not 201 (RFC4918:S7.3)
    ...................... pass (with 1 warning)
38. unlock................ pass
39. finish................ pass
-> 9 tests were skipped.
<- summary for `locks': of 31 tests run: 20 passed, 11 failed. 64.5%

Needs CS3 API changes, unless we want to hack our way around it.

Ansible deployment

The idea is to be able to deploy and maintain a production instance with nexus as the starting point. This would allow support to reproduce issues based on the Ansible scripts.

  • add a make deployment that uses a dedicated repository with ansible scripts

[reva] list outgoing shares

  • User shares
  • Public shares

Restricted to user and public shares; group, guest and federated shares are out of scope for this issue.

curl 'https://phoenix.owncloud.com/ocs/v2.php/apps/files_sharing//api/v1/shares?format=json&include_tags=true&format=json' \
-H 'pragma: no-cache' \
-H 'cookie: oc_sessionPassphrase=yODxGWYV1Lj2YVhEaZNBAFytjKBlvLhtW2Gg63L5o3fwyM1neQexQ93%2FtMPkk3v%2BbViPNcs7%2BErTXRI50b0uaOc7anFJRFPjjUADrb7NG%2BMEGJrl8231MA3zyc3dRK2q; ocmru40aixqr=jguuule56tpt2ojnvtvf6g7c3c' \
-H 'accept-encoding: gzip, deflate, br' \
-H 'accept-language: en-US,en;q=0.9,de;q=0.8' \
-H 'user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36' \
-H 'ocs-apirequest: true' \
-H 'accept: */*' \
-H 'cache-control: no-cache' \
-H 'authority: phoenix.owncloud.com' \
-H 'authorization: Bearer xRjL4AeJlsbQjdJsEt1QMbPWRhskMTSlmrIVkEUEiECkhkCvtc46BCuJcI6dqIIO' \
-H 'referer: https://phoenix.owncloud.com/custom/phoenix/index.html' \
-H 'x-request-id: ee314d4e-5d77-4768-b459-466684fa2ea7' --compressed | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6332  100  6332    0     0  18900      0 --:--:-- --:--:-- --:--:-- 18958
{
  "ocs": {
    "meta": {
      "status": "ok",
      "statuscode": 200,
      "message": null,
      "totalitems": "",
      "itemsperpage": ""
    },
    "data": [
      {
        "id": "3",
        "share_type": 0,
        "uid_owner": "demo",
        "displayname_owner": "demo",
        "permissions": 19,
        "stime": 1557304820,
        "parent": null,
        "expiration": null,
        "token": null,
        "uid_file_owner": "demo",
        "displayname_file_owner": "demo",
        "path": "/ownCloud Manual.pdf",
        "item_type": "file",
        "mimetype": "application/pdf",
        "storage_id": "home::demo",
        "storage": 3,
        "item_source": 89,
        "file_source": 89,
        "file_parent": 22,
        "file_target": "/ownCloud Manual (2).pdf",
        "share_with": "admin",
        "share_with_displayname": "admin",
        "share_with_additional_info": "admin",
        "mail_send": 0,
        "attributes": null,
        "tags": []
      },
      {
        "id": "74",
        "share_type": 3,
        "uid_owner": "demo",
        "displayname_owner": "demo",
        "permissions": 5,
        "stime": 1566479445,
        "parent": null,
        "expiration": null,
        "token": "zt4Jl8dd5UiCk4K",
        "uid_file_owner": "demo",
        "displayname_file_owner": "demo",
        "path": "/First",
        "item_type": "folder",
        "mimetype": "httpd/unix-directory",
        "storage_id": "home::demo",
        "storage": 3,
        "item_source": 184,
        "file_source": 184,
        "file_parent": 22,
        "file_target": "/First",
        "name": "Public link",
        "url": "https://phoenix.owncloud.com/s/zt4Jl8dd5UiCk4K",
        "mail_send": 0,
        "attributes": null,
        "tags": []
      },
      {
        "id": "24",
        "share_type": 0,
        "uid_owner": "demo",
        "displayname_owner": "demo",
        "permissions": 19,
        "stime": 1561123833,
        "parent": null,
        "expiration": null,
        "token": null,
        "uid_file_owner": "demo",
        "displayname_file_owner": "demo",
        "path": "/spreadsheet.odp",
        "item_type": "file",
        "mimetype": "application/vnd.oasis.opendocument.presentation",
        "storage_id": "home::demo",
        "storage": 3,
        "item_source": 124,
        "file_source": 124,
        "file_parent": 22,
        "file_target": "/spreadsheet.odp",
        "share_with": "admin",
        "share_with_displayname": "admin",
        "share_with_additional_info": "admin",
        "mail_send": 0,
        "attributes": null,
        "tags": []
      },
      {
        "id": "44",
        "share_type": 0,
        "uid_owner": "demo",
        "displayname_owner": "demo",
        "permissions": 3,
        "stime": 1564485650,
        "parent": null,
        "expiration": null,
        "token": null,
        "uid_file_owner": "demo",
        "displayname_file_owner": "demo",
        "path": "/defaults.php",
        "item_type": "file",
        "mimetype": "application/x-php",
        "storage_id": "home::demo",
        "storage": 3,
        "item_source": 120,
        "file_source": 120,
        "file_parent": 22,
        "file_target": "/defaults.php",
        "share_with": "admin",
        "share_with_displayname": "admin",
        "share_with_additional_info": "admin",
        "mail_send": 0,
        "attributes": null,
        "tags": []
      },
      {
        "id": "73",
        "share_type": 0,
        "uid_owner": "demo",
        "displayname_owner": "demo",
        "permissions": 1,
        "stime": 1566474991,
        "parent": null,
        "expiration": null,
        "token": null,
        "uid_file_owner": "demo",
        "displayname_file_owner": "demo",
        "path": "/dasdasd",
        "item_type": "file",
        "mimetype": "application/octet-stream",
        "storage_id": "home::demo",
        "storage": 3,
        "item_source": 461,
        "file_source": 461,
        "file_parent": 22,
        "file_target": "/dasdasd",
        "share_with": "share3",
        "share_with_displayname": "John Richards the Emperor of Long Names",
        "share_with_additional_info": "share3",
        "mail_send": 0,
        "attributes": null,
        "tags": []
      },
      {
        "id": "73",
        "share_type": 0,
        "uid_owner": "demo",
        "displayname_owner": "demo",
        "permissions": 1,
        "stime": 1566474991,
        "parent": null,
        "expiration": null,
        "token": null,
        "uid_file_owner": "demo",
        "displayname_file_owner": "demo",
        "path": "/dasdasd",
        "item_type": "file",
        "mimetype": "application/octet-stream",
        "storage_id": "home::demo",
        "storage": 3,
        "item_source": 461,
        "file_source": 461,
        "file_parent": 22,
        "file_target": "/dasdasd",
        "share_with": "share3",
        "share_with_displayname": "John Richards the Emperor of Long Names",
        "share_with_additional_info": "share3",
        "mail_send": 0,
        "attributes": null,
        "tags": []
      },
      {
        "id": "19",
        "share_type": 1,
        "uid_owner": "demo",
        "displayname_owner": "demo",
        "permissions": 1,
        "stime": 1559737320,
        "parent": null,
        "expiration": null,
        "token": null,
        "uid_file_owner": "admin",
        "displayname_file_owner": "admin",
        "path": "/Mini LED Display.dll",
        "item_type": "file",
        "mimetype": "application/octet-stream",
        "storage_id": "shared::/Mini LED Display.dll",
        "storage": 2,
        "item_source": 220,
        "file_source": 220,
        "file_parent": 22,
        "file_target": "/Mini LED Display.dll",
        "share_with": "admin",
        "share_with_displayname": "admin",
        "mail_send": 0,
        "attributes": null,
        "tags": []
      },
      {
        "id": "66",
        "share_type": 3,
        "uid_owner": "demo",
        "displayname_owner": "demo",
        "permissions": 1,
        "stime": 1566376223,
        "parent": null,
        "expiration": null,
        "token": "pgWAXcEpOl448Gt",
        "uid_file_owner": "demo",
        "displayname_file_owner": "demo",
        "path": "/ok.dms",
        "item_type": "file",
        "mimetype": "application/octet-stream",
        "storage_id": "home::demo",
        "storage": 3,
        "item_source": 449,
        "file_source": 449,
        "file_parent": 22,
        "file_target": "/ok.dms",
        "name": "Public link",
        "url": "https://phoenix.owncloud.com/s/pgWAXcEpOl448Gt",
        "mail_send": 0,
        "attributes": null,
        "tags": [
          "_$!<Favorite>!$_"
        ]
      },
      {
        "id": "72",
        "share_type": 3,
        "uid_owner": "demo",
        "displayname_owner": "demo",
        "permissions": 4,
        "stime": 1566473741,
        "parent": null,
        "expiration": "2020-03-03 00:00:00",
        "token": "W4W1Doast9EBpXD",
        "uid_file_owner": "demo",
        "displayname_file_owner": "demo",
        "path": "/test",
        "item_type": "folder",
        "mimetype": "httpd/unix-directory",
        "storage_id": "home::demo",
        "storage": 3,
        "item_source": 415,
        "file_source": 415,
        "file_parent": 22,
        "file_target": "/test",
        "share_with": "***redacted***",
        "share_with_displayname": "***redacted***",
        "name": "Public link",
        "url": "https://phoenix.owncloud.com/s/W4W1Doast9EBpXD",
        "mail_send": 0,
        "attributes": null,
        "tags": []
      },
      {
        "id": "72",
        "share_type": 3,
        "uid_owner": "demo",
        "displayname_owner": "demo",
        "permissions": 4,
        "stime": 1566473741,
        "parent": null,
        "expiration": "2020-03-03 00:00:00",
        "token": "W4W1Doast9EBpXD",
        "uid_file_owner": "demo",
        "displayname_file_owner": "demo",
        "path": "/test",
        "item_type": "folder",
        "mimetype": "httpd/unix-directory",
        "storage_id": "home::demo",
        "storage": 3,
        "item_source": 415,
        "file_source": 415,
        "file_parent": 22,
        "file_target": "/test",
        "share_with": "***redacted***",
        "share_with_displayname": "***redacted***",
        "name": "Public link",
        "url": "https://phoenix.owncloud.com/s/W4W1Doast9EBpXD",
        "mail_send": 0,
        "attributes": null,
        "tags": []
      },
      {
        "id": "74",
        "share_type": 3,
        "uid_owner": "demo",
        "displayname_owner": "demo",
        "permissions": 5,
        "stime": 1566479445,
        "parent": null,
        "expiration": null,
        "token": "zt4Jl8dd5UiCk4K",
        "uid_file_owner": "demo",
        "displayname_file_owner": "demo",
        "path": "/First",
        "item_type": "folder",
        "mimetype": "httpd/unix-directory",
        "storage_id": "home::demo",
        "storage": 3,
        "item_source": 184,
        "file_source": 184,
        "file_parent": 22,
        "file_target": "/First",
        "name": "Public link",
        "url": "https://phoenix.owncloud.com/s/zt4Jl8dd5UiCk4K",
        "mail_send": 0,
        "attributes": null,
        "tags": []
      }
    ]
  }
}

[reva] owncloud storage sharing with guest accounts

Instead of thinking of a black-and-white solution where a storage provider either uses users that are known to the underlying OS or not, we should implement the storing of metadata / share information as ACLs as an optional step for when the user is known. We need to be able to store shares for unknown users / guest accounts anyway. Instead of trying to find a way to create guest accounts on the fly so they get a non-colliding userid in the filesystem (by adding them to an LDAP server under our control), we should think of that as the primary use case. If the user happens to be available in the OS, we can use a tighter filesystem integration and actually store e.g. ACLs.

We need to try hard not to recreate a situation where e.g. native and non-native ACLs need to be kept in sync.

Also some maintenance considerations:

  • How can we convert a system user to a virtual user and back?
  • How can we split share metadata in acls and additional metadata?

For the ownCloud storage provider I started with putting NFSv4-like ACLs into the user namespace of extended attributes. This can be an option when trying to store share permissions for virtual users (which should be identified by a UUID with a separate lookup service, maybe based on SCIM).

Note: the user namespace of extended attributes can be edited by any user with write permissions. The system namespace might be an option if only root should be able to change the attributes. For EOS, CERN plans to implement a dedicated set of extended attributes that users cannot control.

tbc
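The ACE encoding sketched above could look roughly like this. A minimal illustration only: the `ACE` struct, its string format and the idea of storing it under a `user.`-namespace extended-attribute key are assumptions for this sketch, not the actual reva layout.

```go
package main

import (
	"fmt"
	"strings"
)

// ACE is a simplified NFSv4-style access control entry for a virtual user
// identified by a UUID (resolved via a separate lookup service), not an OS uid.
type ACE struct {
	Principal   string // user UUID
	Permissions string // e.g. "rwx"
}

// Marshal encodes the ACE as a compact string suitable for storing in a
// user-namespace extended attribute such as "user.oc.acl.<uuid>" (assumed name).
func (a ACE) Marshal() string {
	return fmt.Sprintf("A::%s:%s", a.Principal, a.Permissions)
}

// ParseACE is the inverse of Marshal.
func ParseACE(s string) (ACE, error) {
	parts := strings.Split(s, ":")
	if len(parts) != 4 || parts[0] != "A" {
		return ACE{}, fmt.Errorf("malformed ACE: %q", s)
	}
	return ACE{Principal: parts[2], Permissions: parts[3]}, nil
}

func main() {
	ace := ACE{Principal: "0042-uuid", Permissions: "rwx"}
	enc := ace.Marshal()
	fmt.Println(enc) // A::0042-uuid:rwx
	back, _ := ParseACE(enc)
	fmt.Println(back.Permissions) // rwx
}
```

A string encoding like this sidesteps OS uid collisions entirely, at the cost of not being enforced by the kernel the way native ACLs are.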

Elasticsearch for searching metadata

To provide search capabilities, we need to add Elasticsearch as part of the new architecture so storage providers can use it instead of having to walk down the file hierarchy recursively.

The next step is to schedule this. AFAICT we need a new API in the CS3 storage provider or a dedicated service, similar to the data provider.

Add metadata cache and propagation strategy on s3

To quickly answer which files changed, we need an mtime and etag for directories.
For S3 we cannot store metadata for keys that represent directories, because it gets lost when adding a key to the prefix... at least with MinIO that is the case.
For local storage that supports extended attributes, we can store the etag as an extended attribute. For both local and S3 we need to do directory size accounting.

To enable stateless sync, mtime, etag and size need to be propagated up the tree. The data needs to be stored in the storage for persistence. A cache on top can then be used to improve query speed.
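The propagation over an in-memory tree can be sketched as follows; `Node` and `Propagate` are made-up names for illustration, not the CS3/reva API:

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"path"
)

// Node holds the metadata that must be propagated up the tree so clients
// can answer "what changed?" without walking the hierarchy.
type Node struct {
	Etag  string
	Mtime int64 // unix seconds
	Size  int64
}

// Propagate bumps mtime, adjusts size by delta and recomputes the etag on
// every ancestor of name, so a PROPFIND on any parent reflects the change.
func Propagate(tree map[string]*Node, name string, mtime, delta int64) {
	for dir := path.Dir(name); ; dir = path.Dir(dir) {
		n, ok := tree[dir]
		if !ok {
			break
		}
		n.Mtime = mtime
		n.Size += delta
		// Any etag scheme works as long as the value changes whenever a
		// descendant changes; here it is derived from path and mtime.
		n.Etag = fmt.Sprintf("%x", sha1.Sum([]byte(fmt.Sprintf("%s:%d", dir, mtime))))
		if dir == "/" {
			break
		}
	}
}

func main() {
	tree := map[string]*Node{"/": {}, "/docs": {}}
	Propagate(tree, "/docs/report.txt", 1700000000, 1024)
	fmt.Println(tree["/"].Size, tree["/docs"].Size) // 1024 1024
}
```

Whether this runs synchronously inside the write request or asynchronously afterwards is exactly the sync/async question raised below.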

This is related to being able to set arbitrary properties: https://github.com/owncloud/nexus/issues/28. Not all S3 implementations allow metadata (MinIO does not).

So a storage needs a metadata persistence strategy / implementation? Hm, what is the CS3 API for this? AFAICT it is implicit: when executing PROPFINDs, sync with the desktop clients will work as long as the etag changes...

What about a propagation strategy? sync? async?

  • sync only for litmus?
  • async only for owncloud clients?
  • what should the default be?
    • when switching to cs3 directly, any clients MUST default to async!

Tagging is modeled as a different service in cs3: https://github.com/cernbox/cs3apis/blob/master/cs3/tag/v0alpha/tag.proto AFAICT it needs an update to use CS3 References instead of filename strings.

As a cache, a k/v store like https://github.com/dgraph-io/badger makes sense. Can we split the actual storage metadata from the blob storage? That is what would be necessary for S3 if we were to use it exclusively, anyway. For now, implement in local and s3, then extract the common pieces?

Keep track of previous userids

The CS3 API uses a so-called scoped identifier to distinguish users. The UserId consists of an idp and an opaqueid, which map to the OpenID Connect iss and sub, the only values the OIDC spec says can be used to identify a user.

I have to disagree.

The OIDC core spec uses URLs as identifiers for the iss. While the combination of sub@iss is described as unique, in reality the iss can change easily, e.g. when a company is renamed due to a merger or a legal issue. We are already seeing this at customers.

There is a related spec from the MODRNA WG: OpenID Connect Account Porting covers migrating a user from one IdP to another. AFAICT the spec fails to capture the scenario of multiple migrations: it expects the two IdPs to do an active handover and to cooperate. The user might still lose access to their account at a relying party, such as oCIS, if they can no longer log in with their old IdP.

AFAICT an OpenID relying party, such as reva or oCIS, should keep a history of sub@iss userids. We can use it to check grants in storage providers: they will still work when a user has a new sub@iss, because the storage provider can also check previous sub@iss entries in the id history.

It will also allow us to do a clean migration of users between instances. Or when turning a guest account into an internal user.

There is a security impact: an attacker could gain access to a user by adding the sub@iss combination of another user to his userid history. That prevents this mechanism from being used for a user-triggered import/export, unless we use signatures and private keys, which then need to be stored in a secure way and tied to a web of trust... urgh... nomadic identities, basically. There is hubzilla, W3C DIDs and a great discussion with a lot of insight into pros and cons. We can limit adding previous userids to a user's history to the admin for now.

This is, by the way, how Microsoft does it in Active Directory with the SID and sidHistory LDAP properties. They also have a KB article on inter-forest sIDHistory migration.

The SID history in AD should be cleaned up, which can be done by updating all grants with the new sub@iss value. See Sneaky Active Directory Persistence #14: SID History. The distributed nature of oCIS makes this a lot harder.
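A grant check against such a userid history could look roughly like this. A sketch only: `User`, `HasGrant` and the plain `sub@iss` string keys are assumptions for illustration, not the CS3 types.

```go
package main

import "fmt"

// User carries its current scoped id plus the history of previous sub@iss
// ids, so grants written before an IdP rename still resolve.
type User struct {
	Current string   // "sub@iss"
	History []string // previous sub@iss values, oldest first
}

// HasGrant checks a storage grant against the current id and, if that
// fails, against every previous id in the history.
func HasGrant(grants map[string]bool, u User) bool {
	if grants[u.Current] {
		return true
	}
	for _, old := range u.History {
		if grants[old] {
			return true
		}
	}
	return false
}

func main() {
	// Grant was persisted before the company IdP moved domains.
	grants := map[string]bool{"alice@old-idp.example.com": true}
	u := User{
		Current: "alice@new-idp.example.com",
		History: []string{"alice@old-idp.example.com"},
	}
	fmt.Println(HasGrant(grants, u)) // true
}
```

Cleaning up, as in the AD case, would mean rewriting the grant to the current sub@iss so the history entry can eventually be dropped.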

Document different deployments

nexus should not only allow setting up

  • a developer environment with make future, but also
  • a production environment.

The idea is to give admins a recommended way to run ownCloud in production, including

  • high availability
  • backup
  • upgrades
  • monitoring
  • incident reporting

[Extension] thumbnails

While thumbnails should be a separate service, it needs to respect the access permissions; otherwise it might leak information.

oCIS Extension based on #54

We can

  • use a content hash of the original file and the thumbnail size for the name of the thumbnail; render it on the fly, cache it on the server side and in the browser. If you don't already have the content hash, you can only brute-force thumbnails. Without authentication, an attacker might be able to iterate over known hashes, which might leak compromising information.
  • optionally, to increase security, check if the user is logged in
  • optionally, to increase security even further, check file permissions and deny thumbnail access if the user has no access.

They can be implemented incrementally.
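The content-hash naming scheme from the first bullet can be sketched like this; `ThumbnailKey` is a made-up helper, and a real service would also render, stream and cache the image:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// ThumbnailKey derives a cache key from the content hash of the original
// file plus the requested thumbnail size. Identical content at the same
// size always maps to the same key, so a thumbnail is rendered once and
// reused, and it is only guessable by someone who already knows the hash.
func ThumbnailKey(content []byte, width, height int) string {
	sum := sha256.Sum256(content)
	return fmt.Sprintf("%x-%dx%d", sum, width, height)
}

func main() {
	a := ThumbnailKey([]byte("same bytes"), 128, 128)
	b := ThumbnailKey([]byte("same bytes"), 128, 128)
	fmt.Println(a == b) // true: deduplicated across users sharing the file
	fmt.Println(a == ThumbnailKey([]byte("same bytes"), 256, 256)) // false
}
```

The optional login and permission checks from the other bullets would then sit in front of the cache lookup, trading performance for tighter access control.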

add `make demo`

  • use production grade docker containers
  • in addition to reva, phoenix, ldap, konnectd and eos
    • start prometheus
    • start jaeger for opentracing
  • run a UI test in the user's browser?

[reva] list incoming shares

curl 'https://phoenix.owncloud.com/ocs/v2.php/apps/files_sharing//api/v1/shares?format=json&shared_with_me=true&state=all&include_tags=true&format=json' \
-H 'pragma: no-cache' \
-H 'cookie: oc_sessionPassphrase=yODxGWYV1Lj2YVhEaZNBAFytjKBlvLhtW2Gg63L5o3fwyM1neQexQ93%2FtMPkk3v%2BbViPNcs7%2BErTXRI50b0uaOc7anFJRFPjjUADrb7NG%2BMEGJrl8231MA3zyc3dRK2q; ocmru40aixqr=jguuule56tpt2ojnvtvf6g7c3c' \
-H 'accept-encoding: gzip, deflate, br' \
-H 'accept-language: en-US,en;q=0.9,de;q=0.8' \
-H 'user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36' \
-H 'ocs-apirequest: true' \
-H 'accept: */*' \
-H 'cache-control: no-cache' \
-H 'authority: phoenix.owncloud.com' \
-H 'authorization: Bearer xRjL4AeJlsbQjdJsEt1QMbPWRhskMTSlmrIVkEUEiECkhkCvtc46BCuJcI6dqIIO' \
-H 'referer: https://phoenix.owncloud.com/custom/phoenix/index.html' \
-H 'x-request-id: f03bf6a3-fd8c-4d42-a84e-8ed175dbab8d' \
--compressed | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1259  100  1259    0     0   5777      0 --:--:-- --:--:-- --:--:--  5801
{
  "ocs": {
    "meta": {
      "status": "ok",
      "statuscode": 200,
      "message": null,
      "totalitems": "",
      "itemsperpage": ""
    },
    "data": [
      {
        "id": "15",
        "share_type": 0,
        "uid_owner": "admin",
        "displayname_owner": "admin",
        "permissions": 15,
        "stime": 1559734869,
        "parent": null,
        "expiration": null,
        "token": null,
        "uid_file_owner": "admin",
        "displayname_file_owner": "admin",
        "state": 0,
        "path": "/Photos-from-admin",
        "item_type": "folder",
        "mimetype": "httpd/unix-directory",
        "storage_id": "shared::/Photos-from-admin",
        "storage": 2,
        "item_source": 9,
        "file_source": 9,
        "file_parent": 22,
        "file_target": "/Photos-from-admin",
        "share_with": "demo",
        "share_with_displayname": "demo",
        "share_with_additional_info": "demo",
        "mail_send": 0,
        "attributes": null
      },
      {
        "id": "18",
        "share_type": 0,
        "uid_owner": "admin",
        "displayname_owner": "admin",
        "permissions": 3,
        "stime": 1559737113,
        "parent": null,
        "expiration": null,
        "token": null,
        "uid_file_owner": "admin",
        "displayname_file_owner": "admin",
        "state": 0,
        "path": "/Mini LED Display.dll",
        "item_type": "file",
        "mimetype": "application/octet-stream",
        "storage_id": "shared::/Mini LED Display.dll",
        "storage": 2,
        "item_source": 220,
        "file_source": 220,
        "file_parent": 22,
        "file_target": "/Mini LED Display.dll",
        "share_with": "demo",
        "share_with_displayname": "demo",
        "share_with_additional_info": "demo",
        "mail_send": 0,
        "attributes": null
      }
    ]
  }
}

Feedback from AARNet

Overview

We now have an almost working deployment of oCIS. Here are some notes on what is not working or missing.

Environment

Phoenix 0.2.3
Reva from github Nov 7, 2019 e56ca76b99b163e8d27dad2630bf415fae73c16a

These are the things that currently do not work for us:

  • Sharing buttons are missing in the UI
  • Navigating into a sub folder does not work Fixed
  • Media view cannot get to /remote.php/dav/public-files//<>/image.png?stuff
  • example toml files do not work
    (FIXED fix configuration examples in separate folder by madsi1m · Pull Request #351 · cs3org/reva · GitHub)
  • /ocs/v2.php/apps/notifications/api/v1/notifications?format=json still missing 404
  • Trash bin does not work
  • Favorites do not work
  • Media viewer does not recognise mp4 as media
  • Deleting a file is broken
  • Overwriting a file is broken

These are the things that are currently missing:

  • We authenticate via a federated, (almost) OpenID Connect compatible endpoint. In the current model, oCIS assumes that the user must exist in the "basic auth" backend. We need a way to create a user on successful login. At the moment it seems oCIS is smart enough to create missing user folders on first login, but not to create the user.
  • Sync client compatibility: I am not able to add ocdav/frontend to the sync client, as it tells me "Host requires authentication"
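The requested create-on-login behaviour could be sketched as follows; the `Provisioner` type and its in-memory map are hypothetical, not how the real oCIS user backend works:

```go
package main

import (
	"fmt"
	"sync"
)

// Provisioner creates an account the first time a federated OIDC identity
// logs in, instead of assuming the user already exists in a local backend.
type Provisioner struct {
	mu    sync.Mutex
	users map[string]bool // keyed by sub@iss
}

func NewProvisioner() *Provisioner {
	return &Provisioner{users: map[string]bool{}}
}

// Login returns true if the account had to be created on the fly.
func (p *Provisioner) Login(sub, iss string) bool {
	key := sub + "@" + iss
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.users[key] {
		return false // known user, normal login
	}
	p.users[key] = true // create the account (and home folder) on first login
	return true
}

func main() {
	p := NewProvisioner()
	fmt.Println(p.Login("jane", "idp.example.org")) // first login: created
	fmt.Println(p.Login("jane", "idp.example.org")) // second login: known
}
```

Creating the user here, rather than only the folders, would also cover the sync-client case, since basic-auth lookups could then resolve the provisioned account.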

reva: properties support

-> running `props':
 0. init.................. pass
 1. begin................. pass
 2. propfind_invalid...... pass
 3. propfind_invalid2..... pass
 4. propfind_d0........... pass
 5. propinit.............. pass
 6. propset............... FAIL (PROPPATCH on `/remote.php/webdav/litmus/prop': 501 Not Implemented)
 7. propget............... SKIPPED
 8. propextended.......... pass
 9. propmove.............. SKIPPED
10. propget............... SKIPPED
11. propdeletes........... SKIPPED
12. propget............... SKIPPED
13. propreplace........... SKIPPED
14. propget............... SKIPPED
15. propnullns............ SKIPPED
16. propget............... SKIPPED
17. propremoveset......... SKIPPED
18. propget............... SKIPPED
19. propsetremove......... SKIPPED
20. propget............... SKIPPED
21. propvalnspace......... SKIPPED
22. propwformed........... pass
23. propinit.............. pass
24. propmanyns............ FAIL (PROPPATCH on `/remote.php/webdav/litmus/prop': 501 Not Implemented)
25. propget............... FAIL (No value given for property {http://example.com/kappa}somename)
26. propcleanup........... pass
27. finish................ pass
-> 14 tests were skipped.
<- summary for `props': of 14 tests run: 11 passed, 3 failed. 78.6%

needs cs3 api changes, unless we want to store properties in a different service

Migration needs to be able to set metadata without propagation

New migration approach

After the architecture change from oc10 to ocis we can now move whole spaces from the old storage provider with the owncloudsql driver to a new storage provider with a different driver, eg. ofqs or cephfs. This operation also needs to be able to import metadata without causing a propagation. Same problem as with the old approach below.

Old migration approach

The import reads a JSON file line by line and updates metadata like etags and shares via the CS3 API. Unfortunately, the order of files and folders might cause a file's etag to overwrite a previously set etag for a folder. To solve this properly, it would be great if the SetArbitraryMetadata call had a nopropagation flag.
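Until such a flag exists, one workaround on the import side is to apply metadata bottom-up, so that a child's propagated etag can no longer overwrite a folder etag set earlier. A minimal sketch; the importEntry type and the export format it stands in for are illustrative, not the actual exporter output:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// importEntry is an illustrative stand-in for one line of the export file.
type importEntry struct {
	Path string
	Etag string
}

// orderForImport sorts entries so that deeper paths are processed first.
// Setting a folder's etag after all of its children means a child's
// propagated etag can no longer overwrite the folder's imported one.
func orderForImport(entries []importEntry) []importEntry {
	sorted := make([]importEntry, len(entries))
	copy(sorted, entries)
	sort.SliceStable(sorted, func(i, j int) bool {
		return strings.Count(sorted[i].Path, "/") > strings.Count(sorted[j].Path, "/")
	})
	return sorted
}

func main() {
	entries := []importEntry{
		{Path: "/folder", Etag: "etag-folder"},
		{Path: "/folder/file.txt", Etag: "etag-file"},
	}
	for _, e := range orderForImport(entries) {
		fmt.Println(e.Path) // file first, then its parent folder
	}
}
```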

Feedback from CERN, followup ocConf 2019

Overview

How shares are organized and synchronized (folders can be moved around, etc.). Looking at the "shared with me" list now: it is too long and cumbersome to manage for some users. This needs a proper product design and scalability discussion: being able to filter by different fields, hiding and unhiding shares, filtering by user, group or OCM share type.
Also, we'd like to be able to decide from the web UI whether a share is synced automatically to the mobile or desktop client, since we have the information about which devices the user is using.

For Phoenix, we'd like it to become a platform that we can cleanly extend with our own views or plugins (which may later be merged upstream if of general interest). It is very important to keep the product aspects (such as sharing) consistent across the different device interfaces (Phoenix and mobile, for example). There will always be differences due to the specifics of each device, but some concepts should be the same (and documented somewhere; this does not really belong in the API).

Related to the CS3 APIs: for Q4 2019, CERN will provide a prototype of the gRPC connection for Phoenix, and ownCloud one for the desktop sync client.

Accepting shares

  • cern: accepting shares is possible from the web interface, but missing in the mobile and desktop clients
  • oc: plans to have it in the mobile and desktop clients too; the new iOS app already allows it

File Metadata propagation for the desktop sync client

  • cern: we'd like to sync the execute bit and macOS labels, for example; to be enabled by exposing a new capability
  • oc: first understand how the CS3 APIs fit this purpose; cern to create an issue to follow up

Syncing unreadable folders

  • cern: oc expects a read-write tree to be completely read-write, but sometimes there will be folders with more restrictive permissions, like read-only, and the sync client will then abort the sync for the entire folder. TODO(labkode): find issue number. TODO(cern): test behaviour with 2.6.0-rc2

Disable user sharing for individual files

  • cern: we'd like to configure clients (mobile, desktop, web) to avoid creating shares to users on individual files.
  • oc: plan to do it by offering a new API that will be consumed by clients to render the correct dialogs for sharing.

Context menu for applications in desktop sync client

  • cern: the desktop sync client offers an OS-integrated context menu to share. We'd like to add our own entries to enable "open in Office" for example.
  • oc: won't add support for context menus. We are going in the direction of opening directly in the oc UI; for example, clicking "show me versions" on the desktop will open a dialog with an embedded browser rendering the versions tab found in the oc UI. The idea is to extend this behaviour to applications, so "Open in Office" will also open the file in the browser.
    OCIS could provide views specifically targeted at the desktop client, making the integration with the OS more native.

Old chunking algorithm

  • cern: still relying on old chunking.
  • oc: still available in 2.7, but with no support if something breaks.

Request ID header

  • oc: the sync client currently sends a request ID in the X-Request-Id header
  • cern: it would also be interesting if the client could send extra information via OpenCensus instrumentation (like the start of the request as seen from the sync client, the version number, ...); that would give us a lot of insights from the client perspective rather than server-side.

UI Private link

  • cern: we'd like to allow people to just copy/paste the URL to access a shared resource.
  • oc: plan to merge private links and public links into one concept. Maier, as product manager, liked the idea of people just copy/pasting URLs for sharing.

Configuration management for sync client: major priority for cern

  • cern: we want to dynamically provision users with configuration (hidden files, enabled features, ...). Currently we can configure some constraints at build time, but that is not enough; we'd like it to be dynamic per user and easily changeable by an administrator without re-building and distributing a branded client.
  • oc: the current possibility is statically branded builds with ownBrander and MDM-style deployment for mobile. There are some plans for user profiles, but they will come with OCIS and new APIs that will allow us to achieve such functionality. Michael will follow up with the configuration team, as he wants to converge all configuration options into a single namespace to brand all clients the same way. Once that is there, dynamic configuration will follow.

Recall of a user's desktop sync client configuration

  • cern: as administrators, we'd like to be able to recall a user's configuration without asking the user to find hidden special files (.sqlite, .ocsync, ...) and attach them to a ticket.
  • oc: we currently have a way to ask a user to create a debug archive, which at the moment contains only the log information; cern will send a list of files to be included in the archive. The archive (zip) is generated and the user chooses where to store it.
  • cern: it would be very helpful if the archive were stored automatically in the sync folder, so it reaches the administrator automatically.
  • oc: Hannah mentioned the possibility of putting a magic file into the synced folder to trigger that behaviour; to be followed up in a subsequent meeting. The feature of pressing F12 to get the logs has been disabled; the user now has the power to generate such an archive at a preferred location.

Symlinks synchronization

  • cern: we'd like the option to also synchronize symbolic links
  • oc: it will be possible with the integration of ocis and the cs3apis

Shared journal for desktop sync client

  • cern: the current sync client stores one sync journal (logs, sqlite db) per sync folder pair; having a single one is desired for debugging done by administrators.
  • oc: the new debug archive can collect all those files.

Add a new sync folder pair desktop sync client wizard

  • cern: the current wizard is confusing for users, as EOS evaluates permissions at the folder level, not across the hierarchy; this causes some PROPFIND requests to fail.
  • oc: cern to send detailed issue.

Update of Windows client

  • cern: updating to 2.5* requires first moving to 2.4.3
  • oc: make sure cern has autoupdater on latest version

Virtual filesystem: major priority for cern

  • cern: we'd like this feature to reach a production state.
  • oc: we now have a new version integrated natively with the Windows operating system, like Dropbox and Google Drive do, using the native "cloud" APIs. We plan to do the same for macOS Catalina. For Linux there is no clear path yet, but we also want to offer this integration there. Suggest trying 2.6.0-rc2 and giving feedback.
  • cern: we'll start testing it.
  • cern: tests so far didn't manage to connect to the production cluster with the current capabilities.

Sync client syncs only UTF-8

  • cern: the sync client converts Latin and other encodings back to UTF-8
  • oc: the user could be notified that the file has been re-encoded to UTF-8

Mobile client developments

  • oc: working on a new iOS application with better integration thanks to the iOS "FileProvider". The Android application
    is being re-implemented with a new architecture.
  • cern: will send issues found by users to see if they have been addressed. A requested feature is automatically synced folders/files, like a desktop sync client on mobile.
  • oc: we plan to provide this functionality by having the same sync algorithm on the client.

Collaboration on Phoenix

  • oc: the UX needs to be improved. Efforts were done on functionality first. Focusing on improving the interface now.

  • cern: we have the concept of project spaces, that are collaborative spaces where people can work together.

  • oc: we also want to introduce such a concept, as "space rooms"

  • cern: that is a great idea; however, the APIs for Phoenix need to be flexible enough to allow adding arbitrary new storage spaces without having to hack code. For example, at some point in the past we exposed an application called "Global EOS" that allowed access to arbitrary storage pools.

  • cern: we want to allow users to copy/paste the URL to access shared spaces. The motivation is that all our end-user targeted storage systems (DFS, AFS, EOS) expose a global path, so people can copy/paste a path and get access to the resource. The web UI masks this path to the root URL (/), leaving no way to allow such functionality (the workaround was the private link). We'd like Phoenix to bring this possibility, either as a configuration option or as a main product feature. This changes the semantics (access by path) and also exposes parent directory names, which may or may not be desired, depending on context. So a holistic product design is needed here as well.

  • oc: Patrick as product manager liked that concept. To be followed up on a subsequent call.

  • cern: the ownCloud SDK and Phoenix should make it possible to configure the SDK with a different storage base path; currently all operations are mapped to "/", but we might want to point to "/eos/user/g/gonzalhu" so that the breadcrumbs are shown to the user in the UI. That is a prerequisite for allowing copy/paste URLs.

  • oc: Thomas and Vince understood the requirement, and it should be possible to do so in the ownCloud SDK; they will follow up in a GitHub issue. CERN will provide feedback by implementing the CS3APIs connector for the ownCloud SDK.

  • oc: some metadata attributes are prefixed with oc namespaces, like "https://owncloud/ns/fileid"; how will they fit into the CS3APIs calls?

  • cern: the same prefix will be used in the CS3APIs when they are vendor-specific, so there should be no problem. For more general attributes we'll use some existing data dictionaries. cern to check with digital repositories at CERN for attribute classification.

Collaboration on OCIS/REVA

  • oc: released a first version of the OCIS setup where services are split into various repositories: ocis, ocis-webdav, ocis-phoenix, ocis-reva. Currently not using any micro-services framework. Would like to move to the go-kit framework, as micro is becoming more commercial.

  • cern: agreed that the choice of framework for OCIS should be based on the sustainability of the framework in the long term, avoiding projects led by single individuals.

  • cern: the current ocis setup is enough to run Reva alongside oc components, as the connection between the ocis-reva services and ocis-webdav is done using the CS3 APIs. The current setup allows Reva and OCIS to evolve independently while ensuring that OCIS stays compatible with any version of Reva. However, it could also be interesting to see if Reva could be backwards compatible with internal OCIS services (pkg commands).

  • oc: that could be possible if Reva uses the same framework and components as ocis. Thomas could prototype it.

  • cern: like the idea, but it needs further discussion with the cernbox team and a follow-up in a subsequent call. Worry that the multiple-repository approach will lead to code duplication and API checks spread across services.

  • oc: code duplication will be there, but relying on code generation should reduce the burden, and for the API checks a common library could be created.

  • cern: would like to follow the approach taken by restic for documentation: the documentation and changelog live alongside the code. We'd also like to tag a version and have an automatic release.

  • oc: should also be doable in Drone, using the calens tool.

  • cern: done, thanks to Thomas for instructions.

CS3 APIS

  • cern: would like the CS3 APIs to be compiled and published for various languages.

  • oc: should be possible to achieve that by using Drone as the CI/CD service.

  • cern: done, thanks to Thomas for providing us instructions on how to do the setup.

  • oc: would be nice to have a proposal system for the CS3 APIs. In Javascript world there is TC39 for such discussions.

  • cern: that would make a lot of sense once the CS3APIs get to a stable state and start to be used elsewhere. The current contributor guidelines should suffice until then.

  • oc: there is not much documentation of the CS3APIs

  • cern: working on a white paper on the CS3APIs to present at CHEP; further documentation will follow alongside Reva.

in place migration

Using the oc10 data exporter app and the reva import command, we can migrate users in batches.
For an in-place migration of an instance, we need to think about how to mix users from reva and oc10 behind the same domain.

Options:

  • Federated shares between nodes behind a reverse proxy?
  • reva as proxy in front of owncloud?

Notifications endpoint

currently cs3 has no notification mechanism, but we should plan and collect ideas so we can collaborate on this at the conference, eg:

  • long polling?
    • can be done efficiently with golang
    • http vs grpc
  • do we need a CS3 endpoint?
  • needs to aggregate changes from various resources
    • separate service with its own cache
    • storage providers the user has access to
    • sharing events
    • activities app
  • document how to disable the polling using capabilities

Update capabilities

Describe the bug

Currently, the capabilities are configured manually, with defaults set to outdated versions. We need a better concept for generating the list of capabilities and removing old ones.

Steps to reproduce

curl the capabilities endpoint

Expected behavior

An up to date list of capabilities, dynamically generated by the current live configuration of all extensions.

Actual behavior

A static list of capabilities that does not reflect reality.

Example

The TUS capability ... which is actually not a global config but a per-folder property ...

Phoenix does not refresh the auth token

With the example/separate configuration for phoenix we use the auth code flow.
Overnight my auth token expired, and I can see the notification requests failing with 401 Unauthorized.

curl 'http://localhost:20080/ocs/v2.php/apps/notifications/api/v1/notifications?format=json' -H 'Sec-Fetch-Mode: cors' -H 'Referer: http://localhost:8300/' -H 'X-Requested-With: XMLHttpRequest' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36' -H 'authorization: Bearer 85-r1nKx4Kq6Sa-x4XvLTXtY25MJDOn8kfmXAN_Dk2g.-slGakJdyR6pMQi0t03fPgXkN169JMRJLR7c2TEgpeM' -H 'OCS-APIREQUEST: true' --compressed -v
*   Trying ::1:20080...
* TCP_NODELAY set
* Connected to localhost (::1) port 20080 (#0)
> GET /ocs/v2.php/apps/notifications/api/v1/notifications?format=json HTTP/1.1
> Host: localhost:20080
> Accept: */*
> Accept-Encoding: deflate, gzip
> Sec-Fetch-Mode: cors
> Referer: http://localhost:8300/
> X-Requested-With: XMLHttpRequest
> User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36
> authorization: Bearer 85-r1nKx4Kq6Sa-x4XvLTXtY25MJDOn8kfmXAN_Dk2g.-slGakJdyR6pMQi0t03fPgXkN169JMRJLR7c2TEgpeM
> OCS-APIREQUEST: true
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< Date: Tue, 12 Nov 2019 09:08:13 GMT
< Content-Length: 0
< 
* Connection #0 to host localhost left intact

Hmmm, no: in the browser it is the OPTIONS request that fails

curl -X OPTIONS 'http://localhost:20080/ocs/v2.php/apps/notifications/api/v1/notifications?format=json' -H 'Sec-Fetch-Mode: cors' -H 'Referer: http://localhost:8300/' -H 'X-Requested-With: XMLHttpRequest' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36' -H 'authorization: Bearer 85-r1nKx4Kq6Sa-x4XvLTXtY25MJDOn8kfmXAN_Dk2g.-slGakJdyR6pMQi0t03fPgXkN169JMRJLR7c2TEgpeM' -H 'OCS-APIREQUEST: true' --compressed -v
*   Trying ::1:20080...
* TCP_NODELAY set
* Connected to localhost (::1) port 20080 (#0)
> OPTIONS /ocs/v2.php/apps/notifications/api/v1/notifications?format=json HTTP/1.1
> Host: localhost:20080
> Accept: */*
> Accept-Encoding: deflate, gzip
> Sec-Fetch-Mode: cors
> Referer: http://localhost:8300/
> X-Requested-With: XMLHttpRequest
> User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36
> authorization: Bearer 85-r1nKx4Kq6Sa-x4XvLTXtY25MJDOn8kfmXAN_Dk2g.-slGakJdyR6pMQi0t03fPgXkN169JMRJLR7c2TEgpeM
> OCS-APIREQUEST: true
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Content-Type: application/json
< Vary: Origin
< Date: Tue, 12 Nov 2019 09:44:53 GMT
< Content-Length: 74
< 
* Connection #0 to host localhost left intact
{"ocs":{"meta":{"status":"error","statuscode":998,"message":"Not found"}}}%

but that is a 404 ... and it contains a body ... so it is not really an OPTIONS response; the method is just not handled ... urgh.

I'll fix that first and then see if phoenix tries to get a new auth token when the old one expires.

[reva] favorites

curl 'https://phoenix.owncloud.com/remote.php/dav/files/demo/' -X REPORT \
-H 'origin: https://phoenix.owncloud.com' \
-H 'accept-encoding: gzip, deflate, br' \
-H 'accept-language: en-US,en;q=0.9,de;q=0.8' \
-H 'authorization: Bearer xRjL4AeJlsbQjdJsEt1QMbPWRhskMTSlmrIVkEUEiECkhkCvtc46BCuJcI6dqIIO' \
-H 'ocs-apirequest: true' \
-H 'cookie: oc_sessionPassphrase=yODxGWYV1Lj2YVhEaZNBAFytjKBlvLhtW2Gg63L5o3fwyM1neQexQ93%2FtMPkk3v%2BbViPNcs7%2BErTXRI50b0uaOc7anFJRFPjjUADrb7NG%2BMEGJrl8231MA3zyc3dRK2q; ocmru40aixqr=jguuule56tpt2ojnvtvf6g7c3c' \
-H 'x-request-id: 12647d70-9312-4db7-af13-aadcb558ec3f' \
-H 'pragma: no-cache' \
-H 'user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36' \
-H 'content-type: application/xml; charset=UTF-8' \
-H 'accept: */*' \
-H 'cache-control: no-cache' \
-H 'authority: phoenix.owncloud.com' \
-H 'referer: https://phoenix.owncloud.com/custom/phoenix/index.html' --data-binary $'<?xml version="1.0"?>\n<oc:filter-files  xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns">\n  <d:prop>\n    <oc:permissions />\n    <oc:favorite />\n    <oc:fileid />\n    <oc:owner-id />\n    <oc:owner-display-name />\n    <oc:privatelink />\n    <d:getcontentlength />\n    <oc:size />\n    <d:getlastmodified />\n    <d:getetag />\n    <d:resourcetype />\n  </d:prop>\n<oc:filter-rules>\n<oc:favorite>1</oc:favorite>\n</oc:filter-rules>\n</oc:filter-files>' --compressed | xml_pp
<?xml version="1.0"?>
<d:multistatus xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns" xmlns:s="http://sabredav.org/ns">
  <d:response>
    <d:href>/remote.php/dav/files/demo/ok</d:href>
    <d:propstat>
      <d:prop>
        <oc:permissions>RDNVW</oc:permissions>
        <oc:favorite>1</oc:favorite>
        <oc:fileid>122</oc:fileid>
        <oc:owner-id>demo</oc:owner-id>
        <oc:owner-display-name>demo</oc:owner-display-name>
        <oc:privatelink>https://phoenix.owncloud.com/f/122</oc:privatelink>
        <d:getcontentlength>0</d:getcontentlength>
        <oc:size>0</oc:size>
        <d:getlastmodified>Thu, 25 Apr 2019 11:45:25 GMT</d:getlastmodified>
        <d:getetag>&quot;5b015f9055a56e89f4a3702ffd1988d9&quot;</d:getetag>
        <d:resourcetype/>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
  <d:response>
    <d:href>/remote.php/dav/files/demo/eula.lnk</d:href>
    <d:propstat>
      <d:prop>
        <oc:permissions>RDNVW</oc:permissions>
        <oc:favorite>1</oc:favorite>
        <oc:fileid>382</oc:fileid>
        <oc:owner-id>demo</oc:owner-id>
        <oc:owner-display-name>demo</oc:owner-display-name>
        <oc:privatelink>https://phoenix.owncloud.com/f/382</oc:privatelink>
        <d:getcontentlength>830</d:getcontentlength>
        <oc:size>830</oc:size>
        <d:getlastmodified>Wed, 12 Jun 2019 12:39:07 GMT</d:getlastmodified>
        <d:getetag>&quot;522e5c8b8d91831b82bc8f671af2884a&quot;</d:getetag>
        <d:resourcetype/>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
  <d:response>
    <d:href>/remote.php/dav/files/demo/test.md</d:href>
    <d:propstat>
      <d:prop>
        <oc:permissions>RDNVW</oc:permissions>
        <oc:favorite>1</oc:favorite>
        <oc:fileid>383</oc:fileid>
        <oc:owner-id>demo</oc:owner-id>
        <oc:owner-display-name>demo</oc:owner-display-name>
        <oc:privatelink>https://phoenix.owncloud.com/f/383</oc:privatelink>
        <d:getcontentlength>305</d:getcontentlength>
        <oc:size>305</oc:size>
        <d:getlastmodified>Sat, 15 Jun 2019 08:02:30 GMT</d:getlastmodified>
        <d:getetag>&quot;a7d5ca36d47abe586cfaf4622c064e3c&quot;</d:getetag>
        <d:resourcetype/>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
  <d:response>
    <d:href>/remote.php/dav/files/demo/fd5526f556f445a4f54afe2e2ce7fe26.jpg</d:href>
    <d:propstat>
      <d:prop>
        <oc:permissions>RDNVW</oc:permissions>
        <oc:favorite>1</oc:favorite>
        <oc:fileid>414</oc:fileid>
        <oc:owner-id>demo</oc:owner-id>
        <oc:owner-display-name>demo</oc:owner-display-name>
        <oc:privatelink>https://phoenix.owncloud.com/f/414</oc:privatelink>
        <d:getcontentlength>50183</d:getcontentlength>
        <oc:size>50183</oc:size>
        <d:getlastmodified>Mon, 24 Jun 2019 15:17:05 GMT</d:getlastmodified>
        <d:getetag>&quot;461b098d039ec7bd3d3ffa4d05a128ab&quot;</d:getetag>
        <d:resourcetype/>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
  <d:response>
    <d:href>/remote.php/dav/files/demo/ok.dms</d:href>
    <d:propstat>
      <d:prop>
        <oc:permissions>RDNVW</oc:permissions>
        <oc:favorite>1</oc:favorite>
        <oc:fileid>449</oc:fileid>
        <oc:owner-id>demo</oc:owner-id>
        <oc:owner-display-name>demo</oc:owner-display-name>
        <oc:privatelink>https://phoenix.owncloud.com/f/449</oc:privatelink>
        <d:getcontentlength>0</d:getcontentlength>
        <oc:size>0</oc:size>
        <d:getlastmodified>Mon, 22 Jul 2019 11:01:00 GMT</d:getlastmodified>
        <d:getetag>&quot;fbe39751824e3923347e9a30765ba58c&quot;</d:getetag>
        <d:resourcetype/>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
</d:multistatus>

add performance measurement

A first approach would be to run smashbox or the phoenix ui tests along with a monitoring tool.

  • add prometheus to docker-compose.yml
    • not really needed for make future
    • start it only for a make demo that can be used to show how a production environment would look?
  • add make performance-metrics
    while prometheus is running:
    • run smashbox
    • run the phoenix ui tests
      afterwards, print CPU, memory and IO graphs as a snapshot for the testing duration
    • generate a PDF with the metrics

add testing tools

Since nexus is a repo that ties together several repos to build a development environment, we need a way to run a test suite. I would like to use smashbox or our acceptance tests, but AFAIK both need access to the instance to speed up user creation and cleanup.

AFAICT, we should be able to run the phoenix test suite as described in https://github.com/owncloud/phoenix#run-acceptance-tests

Litmus tests

To ensure basic WebDAV functionality

API tests

To test corner cases

  • copy api tests from core
  • reimplement steps for nexus / reva / ldap

Phoenix tests

are tracked in owncloud/web#682.

  • add a make acceptance-tests that tests the development instance brought up with make future
    • it can assume the ldap users from owncloudqa to be available

User management

is done via ldap so we can directly use those users and test

  • login and logout
  • file and folder management features: create, list, stat, rename, move, delete
  • file type specific features: edit text files, thumbnail generation, markdown preview ...

Sharing

can be divided in three topics: file and folder management for

  • internal file sharing (let's call this collaboration): create, list, stat, rename, move, delete on the shared folder; sharing, unsharing, resharing
  • public file sharing: create, list, stat, rename, move, delete on the shared folder; sharing, unsharing
  • federated sharing ... ohhhh, how can we run two instances? ... create, list, stat, rename, move, delete on the shared folder; sharing, unsharing, resharing

Search

External storages

The storage implementation uses a new architecture. For now, WebDAV access to the whole tree should be possible. In the future, clients should look up the correct storage using the broker and then directly talk to the responsible storage provider. An external storage is wrapped by a storage provider.

  • Testing file and folder management on external storages ... if we have them

  • a config file containing the list of users and groups to be used for testing so we can run this not only against the make future dev environment but also against any other instance
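The broker lookup mentioned above can be sketched as a longest-prefix match over mount points; the prefixes and addresses below are made up for illustration, not the real reva registry:

```go
package main

import (
	"fmt"
	"strings"
)

// broker maps path prefixes to the storage provider responsible for them.
type broker struct {
	providers map[string]string // mount prefix -> provider address
}

// lookup returns the address of the provider with the longest matching
// mount prefix, so clients can talk to it directly after a single lookup.
func (b *broker) lookup(path string) (string, bool) {
	best, addr := "", ""
	for prefix, a := range b.providers {
		if strings.HasPrefix(path, prefix) && len(prefix) > len(best) {
			best, addr = prefix, a
		}
	}
	return addr, best != ""
}

func main() {
	b := &broker{providers: map[string]string{
		"/home":     "localhost:18000",
		"/external": "localhost:18001",
	}}
	addr, _ := b.lookup("/external/s3/bucket/file.txt")
	fmt.Println(addr) // the provider wrapping the external storage
}
```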

Cannot build with go 1.11

The README says Go 1.11 is required, but running make generate build gives this output:

go get -u github.com/vektah/gorunpkg
go: finding github.com/spf13/viper v1.4.0
go: finding github.com/rs/zerolog v1.15.0
go: finding github.com/owncloud/ocis-ocs v0.0.0-20190905101159-0d7ed8b013a2
go: finding github.com/owncloud/ocis-phoenix v0.0.0-20190905101144-daaf1e6ddb5e
go: finding github.com/spf13/cobra v0.0.5
go: finding github.com/owncloud/ocis-webdav v0.0.0-20190905101156-4d74712e7d46
go: finding github.com/haya14busa/goverage v0.0.0-20180129164344-eec3514a20b5
go: finding github.com/mitchellh/gox v1.0.1
go: finding github.com/ugorji/go v1.1.4
go: finding github.com/gogo/protobuf v1.2.1
go: finding github.com/gorilla/websocket v1.4.0
go: finding github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f
go: finding github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e
go: finding github.com/prometheus/client_golang v0.9.3
go: finding github.com/google/btree v1.0.0
go: finding github.com/jonboulle/clockwork v0.1.0
go: finding github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef
go: finding github.com/soheilhy/cmux v0.1.4
go: finding github.com/mitchellh/go-homedir v1.1.0
go: finding github.com/inconshreveable/mousetrap v1.0.0
go: finding github.com/hashicorp/go-version v1.0.0
go: finding go.etcd.io/bbolt v1.3.2
go: finding golang.org/x/net v0.0.0-20190522155817-f3200d17e092
go: finding github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084
go: finding github.com/prometheus/common v0.4.0
go: finding github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5
go: finding go.uber.org/zap v1.10.0
go: finding golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
go: finding github.com/cpuguy83/go-md2man v1.0.10
go: finding github.com/restic/calens v0.0.0-20190419101620-10f36cb4a529
go: finding go.uber.org/atomic v1.4.0
go: finding golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2
go: finding github.com/beorn7/perks v1.0.0
go: finding github.com/go-logfmt/logfmt v0.4.0
go: finding golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4
go: finding github.com/golang/protobuf v1.3.1
go: finding github.com/coreos/bbolt v1.3.2
go: finding github.com/kisielk/errcheck v1.1.0
go: finding github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90
go: finding github.com/mitchellh/iochan v1.0.0
go: finding github.com/go-stack/stack v1.8.0
go: finding github.com/prometheus/tsdb v0.7.1
go: finding golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
go: finding github.com/gogo/protobuf v1.1.1
go: finding github.com/julienschmidt/httprouter v1.2.0
go: finding github.com/go-test/deep v1.0.1
go: finding github.com/go-logfmt/logfmt v0.3.0
go: finding github.com/rs/xid v1.2.1
go: finding github.com/prometheus/client_golang v0.9.1
go: finding gopkg.in/yaml.v2 v2.2.1
go: finding github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d
go: finding google.golang.org/grpc v1.21.0
go: finding github.com/pkg/errors v0.8.0
go: finding github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515
go: finding github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc
go: finding github.com/zenazn/goji v0.9.0
go: finding golang.org/x/tools v0.0.0-20180221164845-07fd8470d635
go: finding github.com/spf13/viper v1.3.2
go: finding github.com/go-kit/kit v0.8.0
go: finding gopkg.in/alecthomas/kingpin.v2 v2.2.6
go: finding golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5
go: finding github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf
go: finding github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223
go: finding github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2
go: finding github.com/sirupsen/logrus v1.2.0
go: finding github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce
go: finding golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8
go: finding golang.org/x/net v0.0.0-20181114220301-adae6a3d119a
go: finding github.com/oklog/ulid v1.3.1
go: finding github.com/cespare/xxhash v1.1.0
go: finding github.com/russross/blackfriday v1.5.2
go: finding github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954
go: finding github.com/grpc-ecosystem/grpc-gateway v1.9.0
go: finding golang.org/x/tools v0.0.0-20190425163242-31fd60d6bfdc
go: finding golang.org/x/net v0.0.0-20190311183353-d8887717615a
go: finding golang.org/x/tools v0.0.0-20190311212946-11955173bddd
go: finding github.com/google/go-cmp v0.2.0
go: finding golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3
go: finding github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72
go: finding github.com/konsorten/go-windows-terminal-sequences v1.0.1
go: finding honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099
go: finding github.com/stretchr/objx v0.1.1
go: finding golang.org/x/crypto v0.0.0-20180904163835-0709b304e793
go: finding github.com/OneOfOne/xxhash v1.2.2
go: finding golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33
go: finding golang.org/x/net v0.0.0-20181220203305-927f97764cc3
go: finding github.com/ghodss/yaml v1.0.0
go: finding google.golang.org/grpc v1.19.0
go: finding github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af
go: finding gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7
go: finding github.com/kr/pretty v0.1.0
go: finding golang.org/x/sync v0.0.0-20190423024810-112230192c58
go: finding gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127
go: finding gopkg.in/resty.v1 v1.12.0
go: finding github.com/kr/text v0.1.0
go: finding golang.org/x/tools v0.0.0-20190114222345-bf090417da8b
go: finding github.com/kr/pty v1.1.1
go: finding github.com/vektah/gorunpkg latest
go: downloading github.com/vektah/gorunpkg v0.0.0-20190126024156-2aeb42363e48
go: downloading github.com/owncloud/ocis-phoenix v0.0.0-20190905101144-daaf1e6ddb5e
go: downloading github.com/owncloud/ocis-webdav v0.0.0-20190905101156-4d74712e7d46
go: downloading github.com/spf13/cobra v0.0.5
go: downloading github.com/owncloud/ocis-ocs v0.0.0-20190905101159-0d7ed8b013a2
go: downloading github.com/spf13/viper v1.4.0
go: downloading github.com/rs/zerolog v1.15.0
go: downloading github.com/fsnotify/fsnotify v1.4.7
go: downloading github.com/hashicorp/hcl v1.0.0
go: downloading gopkg.in/yaml.v2 v2.2.2
go: downloading github.com/spf13/jwalterweatherman v1.0.0
go: downloading github.com/spf13/cast v1.3.0
go: downloading github.com/magiconair/properties v1.8.0
go: downloading golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
go: downloading github.com/spf13/pflag v1.0.3
go: downloading github.com/spf13/afero v1.1.2
go: downloading github.com/pelletier/go-toml v1.2.0
go generate github.com/owncloud/ocis/cmd/ocis github.com/owncloud/ocis/pkg/command github.com/owncloud/ocis/pkg/version
go build -i -v -tags '' -ldflags '-s -w -X "github.com/owncloud/ocis/pkg/version.String=44e52d0" -X "github.com/owncloud/ocis/pkg/version.Date=20191001"' -o bin/ocis ./cmd/ocis
github.com/owncloud/ocis-ocs/pkg/version
go build github.com/owncloud/ocis-ocs/pkg/version: module requires Go 1.12
github.com/hashicorp/hcl/hcl/strconv
golang_org/x/net/dns/dnsmessage
golang_org/x/crypto/cryptobyte/asn1
golang_org/x/crypto/curve25519
github.com/owncloud/ocis-phoenix/pkg/version
go build github.com/owncloud/ocis-phoenix/pkg/version: module requires Go 1.12
github.com/owncloud/ocis-webdav/pkg/version
go build github.com/owncloud/ocis-webdav/pkg/version: module requires Go 1.12
github.com/owncloud/ocis/pkg/version
go build github.com/owncloud/ocis/pkg/version: module requires Go 1.12
golang.org/x/sys/unix
golang_org/x/crypto/poly1305
golang_org/x/net/route
github.com/hashicorp/hcl/hcl/token
golang_org/x/crypto/cryptobyte
golang_org/x/crypto/internal/chacha20
golang_org/x/text/transform
github.com/hashicorp/hcl/hcl/ast
github.com/hashicorp/hcl/hcl/scanner
github.com/hashicorp/hcl/json/token
github.com/hashicorp/hcl/json/scanner
golang_org/x/text/unicode/bidi
github.com/hashicorp/hcl/hcl/parser
github.com/hashicorp/hcl/json/parser
github.com/hashicorp/hcl/hcl/printer
golang_org/x/crypto/chacha20poly1305
github.com/hashicorp/hcl
golang_org/x/text/unicode/norm
golang_org/x/net/http2/hpack
github.com/pelletier/go-toml
github.com/spf13/afero/mem
golang.org/x/text/transform
net
golang_org/x/text/secure/bidirule
github.com/spf13/cast
github.com/spf13/jwalterweatherman
golang.org/x/text/unicode/norm
gopkg.in/yaml.v2
github.com/fsnotify/fsnotify
golang_org/x/net/idna
github.com/rs/zerolog/internal/json
net/textproto
github.com/mitchellh/mapstructure
golang_org/x/net/http/httpproxy
github.com/spf13/pflag
crypto/x509
golang_org/x/net/http/httpguts
github.com/rs/zerolog
github.com/rs/zerolog/log
github.com/spf13/cobra
go build github.com/spf13/cobra: module requires Go 1.12
crypto/tls
net/http/httptrace
net/http
github.com/magiconair/properties
github.com/spf13/afero
github.com/spf13/viper
make: *** [Makefile:91: bin/ocis] Error 1

User personal settings page

For many apps, we need a page where the user can specify some personal settings.

Some examples:

  • mail notification settings
  • whether to automatically accept incoming local shares
  • oauth2 app list
  • session management
  • setting avatar

I think not all of them would apply to Phoenix, but for those that do, we need a place to manage this.

For now the most obvious place would be under the "Account" page, adding all of these as sections.
This would also require a list of quick links to jump to the sections (the priority depends on how many settings we'll have at first).

A less obvious place would be to display those settings closer to the feature. For example, sharing settings would need to appear somewhere in the files app.

discuss: everything is a resource?

Straight from gitter: https://gitter.im/cs3org/REVA?at=5d722b48ae44a841248d3e73

Hm thinking about the list of incoming, outgoing, trashed and favorite files (and folders) they all look like special storages to me ... the difference is that they only have a single collection with references to other resources in it ...
they could be known to the storage registry ... which presents the list of storages a user has access to ... one of them being the current user home (or his private files) under /home, incoming shares as /shares/incoming, outgoing shares as /shares/outgoing, accepted shares as /shares/accepted ..., trash as /trash, favorites as /favorites, tagged files as /tagged///.../

Jörn Friedrich Dreyer @butonic 11:53
but they actually all just render a collection of references
this is what we did when moving the different custom api calls under the dav endpoint in owncloud
same for versions ... there we actually introduced a meta namespace that we could use for other things as well. it uses /dav/meta/&lt;fileid&gt;/v to list versions and uses /dav/meta/&lt;fileid&gt;/v/&lt;versionid&gt; to reference specific versions

Jörn Friedrich Dreyer @butonic 11:58
hm
probably worth discussing at the conference

Hugo Labrador @labkode 13:47
I see a concern in your model: you are trying to model every piece of the CS3 apis to webdav resources

Jörn Friedrich Dreyer @butonic 14:17
no to storage providers ... the question is if we want to expose metadata as a filesystem or if we want to add dedicated apis for them

Hugo Labrador @labkode 15:27
the cs3 apis expose APIS for trashbin and versioning
you are mapping those to webdav using custom paths: files-trashbin, files-versions
am I right?

Jörn Friedrich Dreyer @butonic 15:27
yes ...
it pulls the metadata into the virtual filesystem tree

Hugo Labrador @labkode 15:27
I don't think that will be the best approach for you to follow in the long term
you are exposing this functionality as webdav resources that people can "sync"

Jörn Friedrich Dreyer @butonic 15:28
no

Hugo Labrador @labkode 15:28
i.e they are normal webdav endpoints
if I do a propfind from Finder there, I will get the files locally
same if you want to expose favourites or other information
In our case we won't use the phoenix webdav endpoint but rather the grpc directly
and if you want to stick to http, the easiest would be to consume a grpc-http JSON gateway

Jörn Friedrich Dreyer @butonic 15:31
people only sync the /dav/files/&lt;username&gt; tree

Hugo Labrador @labkode 15:32
But these same people have access to the trashbin endpoint, if I got it correctly

Jörn Friedrich Dreyer @butonic 15:32
right so in the future we need to extend the cs3 api should we eg want to implement a service that lists metadata for a user

Hugo Labrador @labkode 15:33
yes, and I think you already have an issue open that ListContainer could return metadata associated with that resource
I think that should be enough
for listing favourites, comments, any metadata attached to a resource

Jörn Friedrich Dreyer @butonic 15:33
why not represent metadata as resources and use storage providers that implement the necessary functionality
it may not work for everything, but it certainly is flexible enough to handle anything I can currently think of.
even search could be implemented that way
versions and trashed files are resources ... we should treat them that way.
maybe something to discuss at the conference
it is definitely something we think we did right when revisiting the apis in owncloud

Hugo Labrador @labkode 15:36
In your case that will mean that resource operations need to be namespaced and not enforced by the semantics of the API
by that I mean that currently you can call ListVersions and you know that you receive the versions of a file
in your case, you will call ListContainer(/versions) and get back resources with the type set to "version"; that means the /versions path must be enforced across the implementors of the APIs for getting versions

Jörn Friedrich Dreyer @butonic 15:37
yes ... in reva we also have an implicit virtual file tree where we mount storages for a user

Jörn Friedrich Dreyer @butonic 15:52
so for the new /dav endpoint in owncloud we have several namespaces:

/dav/files/&lt;username&gt;/ - which is the user's home
/dav/meta/&lt;fileid&gt; - which allows accessing metadata, currently only versions using /dav/meta/&lt;fileid&gt;/v
/dav/trash-bin/&lt;username&gt;/ to list the trashbin. in the cs3 api we could return references to paths
/dav/avatars/&lt;username&gt;/&lt;size&gt;.&lt;format&gt; to get an avatar for the user - I think here a /dav/users/&lt;username&gt;/avatar/&lt;size&gt;.&lt;format&gt; would have been the better choice. It would allow listing users and their properties
I am not saying that we did everything correct, but instead of defining a protobuf spec for eg listing all the above we could repurpose the storage provider ... as a resource provider ...
Then we only need to agree on the type of resource that is listed but avatar pictures are just images. trashbin and versions also list files. users and groups would be interesting because if you want to list the properties of a user ... you could model each property as a (text) file/folder / reference

same goes for user roles ...

Hugo Labrador @labkode 15:55
I have two questions:

Jörn Friedrich Dreyer @butonic 15:55
could be handled with a resource provider

Hugo Labrador @labkode 15:55

  1. Why /dav/meta/fileid does not follow the convention

Jörn Friedrich Dreyer @butonic 15:56
because it needs to uniquely identify a resource ... and the fileid is stable

Hugo Labrador @labkode 15:56
2) Those urls work for user-based paths, but imagine a project space that is not attached to a user, how would you represent it?
/dav/project/<project_name>/trash-bin?
and so on?
so for 1) that means that a storage provider for a user will store the favourite information for another user?

Jörn Friedrich Dreyer @butonic 15:57
/dav/projects/&lt;projectid&gt; would probably be a more stable way ... but in general i think username and a unique id are interchangeable
oh cool idea
/dav/users/jfd/favorites would be a collection of references
and yes it would be implemented by a separate resource provider, one for listing users and their metadata
you can change the implementation as an admin by mounting a different resource provider to different locations in the virtual tree
that I am using the /dav prefix here is just legacy
but I think we should be able to come up with a good namespace especially to get the group storages or project spaces mounted properly
IIRC some solutions call it places

Jörn Friedrich Dreyer @butonic 16:03
so we could use that as a namespace for storages, eg /places/home/, /place/
same for shares? /shares//incoming outgoing ... well if they are different from places ...
anyway ... this feels right but it just erupted out of me ... so I may need a good round of discussion about this ... at the conference ;-)

Overwriting files does not work

Reported in #45.
Can confirm locally.

Access to XMLHttpRequest at 'http://localhost:20080/remote.php/webdav/20191007_105648.jpg' from origin 'http://localhost:8300' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
files.bundle.js:7 PUT http://localhost:20080/remote.php/webdav/20191007_105648.jpg net::ERR_FAILED

Sync when sharing to 100k users

Let me describe how reva and CS3 are going to handle the following corner case:

The scenario

Feature: Sharing to 100k users

  Background:
    Given there is a group "everyone" with 100k users
    And "Alice" shared "/Projects/X/docs/" with the group "everyone"
    And each user in "everyone" has accepted the "docs" share and mounted it to a random subfolder
    And each user in "everyone" has started a desktop client

  Scenario: change discovery
    When "Alice" changes "/Projects/X/docs/readme.md"
    Then each client will get a changed etag in the PROPFIND /, Depth 0 Response and start the discovery

  Scenario: change notification
    When "Alice" changes "/Projects/X/docs/readme.md"
    Then each client will receive a push notification that his mounted file changed

The problem

The moment Alice shares this to a group with 100k users, 100k new mount points are created and 100k storages need to propagate the etag. In oc10 we do that by collecting all affected nodes in the filecache and their parents up to the root and then updating them all at once, which causes a hotspot for the root nodes of storages that are affected by a lot of changes. That becomes more likely as the number of incoming shares a user has mounted grows. It literally causes locking issues in the database.

The solution

In reva we limit the etag propagation to the storage provider in which it occurred. There is a storage registry which knows where a user has mounted a share. Actually the mount point is persisted by the storage provider, and a ListCollectionResponse will contain the mount point as a resource with type RESOURCE_TYPE_REFERENCE. The StorageRegistry is just a cache for the mount points so subsequent discovery can be done without having to traverse the whole tree.
The reva gateway can always calculate the etag of the / of a user's home by querying the etags of all storages the user has access to and picking the youngest etag. This is cached to eliminate subsequent queries when nothing has changed.
When a file changes, reva decides when to change the etag of the discovery PROPFIND to / for every user that has access. We don't just invalidate the etag because that would mean 100k clients trying to do a discovery at once. reva needs to be able to control when the clients will get notified of a change. Eventual sync is what users expect here.

Technical options

distributed etag propagation

We could store the etag in the storage provider and do a cross storage etag propagation by propagating the change up to all mounting storages. However, that would require linking all receiving storages to the shared storage, which leads to a child knowing about its parents.

It also reproduces the original problem, but instead of the database the storage provider now has to deal with the hotspot of a frequently changing home etag.

active etag tree caching

We could use the gateway to cache the etag for every node from the root of a user's home to every mount point. The sync discovery can always calculate the etag for a node in the tree by determining the responsible storage based on the mount points (which can be retrieved from the storage providers and are cached by the storage registry). The result of walking the mount tree and calculating the etag for the "virtual" nodes can be cached: when the first propfind causes the calculation of all these nodes, the discovery can use the cache in the subsequent sync. Only the root nodes of the users' homes will remain cached because that is what clients keep polling.

It avoids the hotspot by keeping the frequently changing nodes in a cache while at the same time giving reva control over when to invalidate the cache. Instead of just using a TTL, a background process can actively fill and overwrite the cache of virtual nodes to throttle the sync activity of thousands of clients to a manageable load.
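The key property of this cache — entries are overwritten by a background process instead of expiring on a TTL — can be sketched like this (etagCache and its methods are illustrative names; a real implementation would shard and persist the data):

```go
package main

import (
	"fmt"
	"sync"
)

// etagCache holds the computed etags of virtual nodes. Entries never
// expire on their own: a background process decides when to overwrite
// them, which throttles the moment clients see a change and start syncing.
type etagCache struct {
	mu    sync.RWMutex
	etags map[string]string // path -> etag
}

func newEtagCache() *etagCache {
	return &etagCache{etags: make(map[string]string)}
}

// Get serves the discovery PROPFINDs straight from the cache.
func (c *etagCache) Get(path string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	e, ok := c.etags[path]
	return e, ok
}

// Refresh is called by the background process to publish a new etag,
// effectively choosing when clients get to notice the change.
func (c *etagCache) Refresh(path, etag string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.etags[path] = etag
}

func main() {
	c := newEtagCache()
	c.Refresh("/home/marie", `"v1"`)
	e, _ := c.Get("/home/marie")
	fmt.Println(e)
}
```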

Future considerations

push based sync

All of the above is the backbone of the state based sync that is currently used by the ownCloud client. With the new architecture implementing a notification or push based sync becomes more interesting because we can build it into the revad service.

We can either use the notifications endpoint or introduce long polling to the files dav endpoint. In either case we can return a response to the client when the cached etag of the root node is changed.

delta based sync

When a client goes offline, a lot of changes might happen before it comes back online. The fallback mechanism will always be a state based sync that relies on the etag. However, using the notifications endpoint we can retrieve all notifications (including file changes) since the last request (usually tracked with a token). Using this token we can keep track of where a client is in the history of changes, omit the discovery and directly sync the changed files.

This requires keeping track of changes (which should be a separate activity service that just caches changes) as well as the current position of clients in that history. In oc10 the activity app uses a suboptimal way of storing changes and is truncated only after 365 days. There is a lot of room for improvement and we can limit the activity history cache to 30 days by default (it should be possible to reconstruct older changes at least partly based on the versions in a storage). In case a client comes along whose token is older than the known history, we tell it to fall back to the state based sync.
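The token handling described above can be sketched as follows; changeLog, ChangesSince and the numeric tokens are illustrative, and a real service would persist the history and cap it at e.g. 30 days.

```go
package main

import "fmt"

// changeLog keeps an ordered history of file changes. Clients remember
// the token (sequence number) of the last change they processed.
type changeLog struct {
	first   int      // token of the oldest change still retained
	changes []string // changed paths; changes[i] has token first+i
}

// ChangesSince returns all changes after the given token. ok is false
// when the token predates the retained history: the client must then
// fall back to the etag-based state sync.
func (l *changeLog) ChangesSince(token int) (changes []string, ok bool) {
	if token < l.first-1 {
		return nil, false // history truncated, state sync required
	}
	idx := token - l.first + 1
	if idx >= len(l.changes) {
		return nil, true // client is already up to date
	}
	return l.changes[idx:], true
}

func main() {
	log := &changeLog{first: 100, changes: []string{"/docs/a.md", "/docs/b.md"}}
	// A client that last saw token 100 only needs the second change.
	changes, ok := log.ChangesSince(100)
	fmt.Println(changes, ok)
}
```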

Avatars

Avatars should be available at the IdP, but that will require additional CORS setup. ownCloud already has an avatars endpoint that we need to implement: it can look up the avatar URL in the user metadata and either stream the content or e.g. fetch it from LDAP. We might have to cache them in our service.

HTTP/1.1 200 OK
Content-Type: application/json

{
 "sub"         : "248289761001",
 "name"        : "Jane Doe",
 "given_name"  : "Jane",
 "family_name" : "Doe",
 "email"       : "[email protected]",
 "picture"     : "http://example.com/janedoe/me.jpg"
}

The ocs service can fetch that picture URL, cache it and serve it when a client requests avatars, e.g. when searching recipients.
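Extracting the picture claim from such a userinfo response could look like the sketch below; the struct and function names are illustrative, and the real ocs service would additionally fetch and cache the image bytes.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// userInfo models the subset of OIDC userinfo claims relevant for
// avatars: the stable subject id and the picture URL.
type userInfo struct {
	Sub     string `json:"sub"`
	Name    string `json:"name"`
	Picture string `json:"picture"`
}

// pictureURL parses a userinfo JSON document and returns the avatar URL.
func pictureURL(body []byte) (string, error) {
	var ui userInfo
	if err := json.Unmarshal(body, &ui); err != nil {
		return "", err
	}
	return ui.Picture, nil
}

func main() {
	body := []byte(`{"sub":"248289761001","name":"Jane Doe","picture":"http://example.com/janedoe/me.jpg"}`)
	url, err := pictureURL(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(url)
}
```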

[EPIC] In place migration of oc10 to ocis

In place migration of oc10 to ocis

Milestone issue list: Migration

Scenario

This scenario is based on the one for the data_exporter requirements.

Given an instance at https://demo.owncloud.com where user einstein shared a folder /photos with marie, who mounted it as /projects/abc, this is the flow that is planned to migrate an instance user by user:

  • cover reshares
  • cover (re-)shares to groups

0. Prerequisites

1. Set up reverse proxy

  1. make oc10 available at https://oc10.owncloud.com as well
  2. Introduce OCIS as proxy for https://demo.owncloud.com
  • add proxy middleware for reva

OCIS is used to forward requests for unmigrated users to the oc10 instance/domain.

  • it keeps a list of migrated users to decide which requests to forward

Migrated users will be hosted by ocis
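The routing decision the proxy has to make can be sketched as a pure function; targetFor is an illustrative name, the oc10 URL matches the scenario above, and the real ocis proxy keeps its migrated-user list elsewhere.

```go
package main

import "fmt"

// targetFor returns the oc10 URL to forward a request to, or
// forward=false when the user is already migrated and the request is
// handled by ocis itself.
func targetFor(user string, migrated map[string]bool) (url string, forward bool) {
	if migrated[user] {
		return "", false // serve locally from ocis
	}
	return "https://oc10.owncloud.com", true // legacy instance
}

func main() {
	migrated := map[string]bool{"marie": true}
	if url, forward := targetFor("einstein", migrated); forward {
		fmt.Println("forward einstein to", url)
	}
	if _, forward := targetFor("marie", migrated); !forward {
		fmt.Println("serve marie locally")
	}
}
```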

2. Migrate groups (optional)

Can be skipped if all groups are maintained in an LDAP server that has been configured for oc10 & ocis

  • occ export:groups TODO there is only occ groups:list
  • occ import:groups TODO use occ group:add

3. Migrate user by user

  1. in oc10 export user file and share metadata: occ export:user einstein, occ export:user marie, occ export:user richard
  2. in ocis reva import marie --user-iss https://idp.owncloud.com --user-sub cb1dd81e-6967-44f7-8239-dffbbe319e92
  • imports the user marie with iss https://idp.owncloud.com and sub cb1dd81e-6967-44f7-8239-dffbbe319e92 into ocis
    • create the storage if the user does not exist yet
    • add storage registry entry? for now the static registry will use the same storage provider for all the users.
  • shares to migrated users are recreated as internal shares (for the first user, no shares will be migrated)
    • due to path based references we need to have the list of oc10 usernames to ocis sub@iss mappings available to properly rewrite the userid in references
  • shares to non-migrated users are created as federated shares to the old instance
    • with a user id mapping file we can set the permissions in the storage
    • implement a handler for the federated sharing api
      • needs an auth provider for the federated sharing id
  • shares from migrated users are mounted
    • can be determined if the exported reference can be resolved to an existing file/folder, including the userid mapping
  • shares from non migrated users are skipped, they will be migrated in the next step
  • public shares are recreated with the same token
  • import file metadata, WIP PR cs3org/reva#299 needs cs3org/reva#289

Repeat the import and export for as many users as desired

  1. s1: occ migrate:shares marie https://demo.owncloud.com
  • shares to marie are converted into federated shares to [email protected]
    • use a username -> iss & sub mapping file
    • creating federated shares will send a federated sharing request to ocis... or not because oc10 thinks it is still responsible for both domains.
    • needs to be done using sql
    • the command needs an API to create shares in OCIS
      • for shares from/to migrated users, internal shares are set up
      • for shares from/to non migrated users, federated shares are created.
  1. s1: occ user:delete marie
  • but without actually deleting files?
  • mark user as migrated for the proxy
  1. migrate trash in a dedicated step?
  2. migrate versions in a dedicated step?

migrate data (file content)

  • data needs to be moved from old ownCloud data directory into new eos layout
    • no files (files_trashbin, files_versions) folder per user, we need to move files to new user home
      • to import versions into EOS, see below
      • to import trash into EOS, see below
    • owner changes from www-data to the actual user (users must be known to the underlying os, so ACLs can be set up correctly)

proxy

  • user by user migration needs to send users to either revad or oc10 apache servers
    • task for the authentication / reverse proxy, likely a header that is set during login
    • can we make reva act as a proxy for the old oc10 so we can intercept the traffic and send users to the right instance
      • would that help with sharing?
      • would this allow sharding instances
  • how do we authenticate users
    • phoenix supports oidc and oauth2, but no basic auth
    • revad supports oidc and can use basic auth
    • since we have to control the users anyway, in order to set ACLs for guest accounts we may have to manage our own ldap server that can then be used to provide users for an oidc capable IdP

to implement search add REPORT verb to ocdavsvc

phoenix uses it to search files. cs3apis do not talk about search.

  • add it to reva?
  • add it to nexus?
    • as elasticsearch + reva service?
    • as a reva service that walks the tree recursively? for testing?
  • add elasticsearch to storages directly? Why if cs3 has no api endpoint for it? should we add search to cs3?
  • if we use elasticsearch should we use a proper query language?

Authorization headers need to be passed on

The current implementation of reva uses the gateway to authenticate, route and trace all requests in a stateless service. The idea is that we can set up the actual storage, sharing and user provider services as needed and scale the gateway as needed, because all of the 'backend' services need to authenticate requests.

Currently, the gateway replaces whatever credentials are used with a jwt token that contains the user id and other metadata like displayname, email and username.

The problem is that the original credentials are not passed on. This creates a problem when they are needed to access other services in the organization. I can identify two use cases:

  • Kerberos tokens will be needed to mount windows shares
  • OIDC bearer tokens are needed to make requests against the graph api, eg. to look up users for sharing or create guest accounts.
    • in this case it is debatable whether we should pass on the user's bearer token or use a system user that acts on behalf of users. AFAICT it makes sense to use the user token, but more effort to understand how to properly use scopes and to let the user give consent that reva will access their graph is advisable.

I prototyped using the bearer token as the internal jwt token in cs3org/reva#382. While that works nicely for jwt based self signed tokens as used by kopano, it is horrible for the recommended opaque tokens that need an additional introspection request against the idp. Every reva process would need to do the introspection again, hammering the IdP with introspection requests.

So, I went on and implemented cs3org/reva#384 which is needed to make oidc work with ldap or the kapi graph as a user backend. Currently, the gateway will authenticate credentials and then look up the user metadata based on the userid.

It seems reasonable to combine the two PRs and

  1. extend the oidc token manager with a redis cache for introspection requests. jwt based tokens work just fine
  2. allow a chain of token strategies, because it might be a bearer token, a kerberos ticket or a jwt token we created ourselves when replacing basic auth credentials.

Writing this down I realized that another requirement we have is being able to extend whatever authentication is used with additional roles. That is only possible if we use our own jwt token. So we need to do this the other way round: store the original credentials in our jwt token. While it seems counterintuitive to store a jwt in a jwt, it is the only way to pass on the original jwt bearer token (or a kerberos token) and additional metadata.
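Carrying the original credential as a claim in our own token can be sketched as below. To stay self-contained this builds only an unsigned base64url JSON payload with the standard library; real code would of course sign and verify the token, and the claim names are assumptions.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// claims is our own token payload: the resolved user plus any roles we
// added, with the untouched incoming credential carried along for
// services that still need it (graph api calls, kerberos mounts, ...).
type claims struct {
	Sub       string   `json:"sub"`
	Roles     []string `json:"roles"`
	OrigToken string   `json:"orig_token"` // the original bearer/kerberos token
}

// encodeClaims serializes the claims as a base64url JSON payload, the
// same shape as a jwt body (signing is omitted in this sketch).
func encodeClaims(c claims) (string, error) {
	b, err := json.Marshal(c)
	if err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(b), nil
}

func decodeClaims(s string) (claims, error) {
	var c claims
	b, err := base64.RawURLEncoding.DecodeString(s)
	if err != nil {
		return c, err
	}
	err = json.Unmarshal(b, &c)
	return c, err
}

func main() {
	payload, _ := encodeClaims(claims{
		Sub:       "marie",
		Roles:     []string{"guest-admin"},
		OrigToken: "eyJ...original-bearer-token",
	})
	c, _ := decodeClaims(payload)
	fmt.Println(c.Sub, c.OrigToken)
}
```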
