
cooked-validators's Introduction

Copyright Tweag I/O 2023

With cooked-validators you can test Cardano smart contracts (including Plutus v2 features) by writing potentially malicious offchain code. You can also use the library to write "normal" offchain code in a comfortable and flexible way.

In particular, cooked-validators helps you

  • interact with smart contracts written in Plutus (as well as any other language that compiles to UPLC, such as Plutarch, by loading contracts from byte strings),
  • generate and submit transactions declaratively, while automatically taking care of missing inputs and outputs, balancing, and minimum-Ada constraints,
  • construct sequences of transactions in an easy-to-understand abstraction of "the blockchain", which can be instantiated to different actual implementations,
  • run sequences of transactions in a simulated blockchain,
  • apply "tweaks" to transactions right before submitting them, where "tweaks" are modifications that are aware of the current state of the simulated blockchain, and
  • compose and deploy tweaks with flexible idioms inspired by linear temporal logic, in order to turn one sequence of transactions into many sequences that might be useful test cases.

The library is geared specifically towards testing and auditing (already existing) on-chain code.

You are free to copy, modify, and distribute cooked-validators under the terms of the MIT license. We provide cooked-validators as a research prototype under active development, and it comes as is with no guarantees whatsoever. Check the license for details.

How to integrate cooked-validators in a project

This guide shows you how to use cooked-validators in a Haskell project using Cabal to create and validate a simple transaction.

To integrate cooked-validators into your project:

  1. If you have no constraint on the version of plutus-apps, copy the file cabal.project to your project and adapt the packages stanza.
  2. Add the following stanza to the file cabal.project
    source-repository-package
      type: git
      location: https://github.com/tweag/cooked-validators
      tag: v2.0.0
      subdir:
        cooked-validators
    
  3. Make your project depend on cooked-validators and plutus-script-utils
  4. Enter a Cabal read-eval-print-loop (with cabal repl) and create and validate a transaction which transfers 10 Ada from wallet 1 to wallet 2:
    > import Cooked
    > import qualified Plutus.Script.Utils.Ada as Pl
    > printCooked . runMockChain . validateTxSkel $
          txSkelTemplate
            { txSkelOuts = [paysPK (walletPKHash $ wallet 2) (Pl.adaValueOf 10)],
              txSkelSigners = [wallet 1]
            }
    [...]
    - UTxO state:
       pubkey #a2c20c7 (wallet 1)
        - Lovelace: 89_828_471
        - (×9) Lovelace: 100_000_000
       pubkey #80a4f45 (wallet 2)
        - Lovelace: 10_000_000
        - (×10) Lovelace: 100_000_000
       pubkey #2e0ad60 (wallet 3)
        - (×10) Lovelace: 100_000_000
       pubkey #557d23c (wallet 4)
        - (×10) Lovelace: 100_000_000
       pubkey #bf342dd (wallet 5)
        - (×10) Lovelace: 100_000_000
       pubkey #97add5c (wallet 6)
        - (×10) Lovelace: 100_000_000
       pubkey #c605888 (wallet 7)
        - (×10) Lovelace: 100_000_000
       pubkey #8952ed1 (wallet 8)
        - (×10) Lovelace: 100_000_000
       pubkey #dfe12ac (wallet 9)
        - (×10) Lovelace: 100_000_000
       pubkey #a96a668 (wallet 10)
        - (×10) Lovelace: 100_000_000

Documentation

The rendered Haddock for the current main branch can be found at https://tweag.github.io/cooked-validators/.

The CHEATSHEET is a nice entry point and a helper to keep in sight. It contains many code snippets to quickly get an intuition of how to do things. Use it to discover or search for how to use features of cooked-validators. Note that it is neither a tutorial nor a ready-to-use recipe book.

We also have a repository of example contracts with offchain code and tests written using cooked-validators.

Please also look at our issues for problems that we're already aware of, and feel free to open new issues!

cooked-validators's People

Contributors

0xd34df00d, ak3n, carlhammann, dependabot[bot], etiennejf, facundominguez, florentc, gabrielhdt, guillaumedesforges, guillaumegen, lucaspena, maximilianalgehed, mmontin, niols, serras, victorcmiraldo

cooked-validators's Issues

Can we start doing releases?

There has been a lot of great work since the most recent release (March 2 😱) . I am working on an IOG project that is depending on a cornucopia of forks and arbitrary pinned commits, and am hoping to assess the work of depending on this official repo. It would be helpful in general to have more frequent minor releases and change logs in order to do the assessment, and also to track down changes to the dependencies of this lib. Is this something that we are able to start doing?

Handle `mustPayToPubKeyWithDatum` constraint

Looking at the plutus-apps repos, I see the following was merged: IntersectMBO/plutus-apps#146
There are some places in our code-base where we assumed that the existence of a datum implied the address was a ScriptCredential; that assumption will be broken in the future, so we should investigate this. IIRC this is probably around generateTx' and validateTx' within Cooked.MockChain.Monad.Direct.

does not work on mac

On

  • MacBook Pro (13-inch 2019, Four Thunderbolt 3 ports)
  • Processor 2.8 GHz Quad-Core Intel Core i7
  • Memory 16GB 2133 MHz LPDDR3
  • nix (Nix) 2.5.1
> git clone git@github.com:tweag/plutus-libs.git
Cloning into 'plutus-libs'...
remote: Enumerating objects: 1067, done.        
remote: Counting objects: 100% (498/498), done.        
remote: Compressing objects: 100% (288/288), done.        
remote: Total 1067 (delta 336), reused 246 (delta 206), pack-reused 569        
Receiving objects: 100% (1067/1067), 318.19 KiB | 1.35 MiB/s, done.
Resolving deltas: 100% (537/537), done.
> cd plutus-libs/
> nix-shell 
error: Package ‘systemd-247.6’ in /nix/store/wk2ffi76ga742rp9q3krhkd2qvxy32wk-nixpkgs-src/pkgs/os-specific/linux/systemd/default.nix:534 is not supported on ‘x86_64-darwin’, refusing to evaluate.

       a) To temporarily allow packages that are unsupported for this system, you can use an environment variable
          for a single invocation of the nix tools.

            $ export NIXPKGS_ALLOW_UNSUPPORTED_SYSTEM=1

       b) For `nixos-rebuild` you can set
         { nixpkgs.config.allowUnsupportedSystem = true; }
       in configuration.nix to override this.

       c) For `nix-env`, `nix-build`, `nix-shell` or any other Nix command you can add
         { allowUnsupportedSystem = true; }
       to ~/.config/nixpkgs/config.nix.
(use '--show-trace' to show detailed location information)

Build Haddocks

We would like to have some sort of GH Action that builds the Haddocks once we push to master, and puts them somewhere (AWS? GH Pages?).

Trim dependencies for better integration with dApp codebases

Is your feature request related to a problem? Please describe.
I'd like to integrate this library into our codebase, but the dependency set pinned by the current plutus-apps release is quite heavy-weight and easily conflicts with newer dependencies.

Describe the solution you'd like
Aggressively trimming dependencies when moving to the next plutus-apps release would save a lot of resources building this library, for both maintainers and library users. Fewer dependencies also mean fewer conflict risks when integrating the library into dApp codebases.

Describe alternatives you've considered
Having to revert existing dependencies and API breaking change fixes, and add unused dependencies to integrate this library into our dApp codebase.

Additional context
Upstream is already moving well on this end:

I'd love to open a PR to do this after the next plutus-apps release!

`MonadBlockChain` instance for `Contract` does not take `TxOpts` into account

The MonadBlockChain instance for the Plutus Contract monad does not take the TxOpts options in transaction skeletons into account, especially regarding balancing and ensuring that every UTxO has the minimum required amount of Ada.

For example, take a skeleton specifying that a public key UTxO should be spent to mint an NFT and put it at a script address, with the adjustUnbalTx option enabled. The instance for Contract generates a transaction with only one script output carrying only the NFT. Unlike the transaction generated by Direct, it is missing both the change output paying back to the public key and the 2 Ada in the script output.

Set up CI with inter-branch caching

It would be good to set up some CI infrastructure for this repo. This could help maintain the examples in tandem with cooked-validators and make sure we don't break one without fixing the other.

Function to decide when two `Constraints` are semantically equal

The Constraints type has an Eq instance, which implements a very fine notion of equality. That's good, as we discussed when we defined it. Sometimes, however, we want to compare two Constraints and the question we have in mind is "Do they describe the same transaction?".

In the current reworking of the attack framework, the function sameConstraints tries to answer that question. At the moment, it is only used to make the modified transactions generated by the doubleSatAttack unique. There's also assertSameConstraints, which is based on the same logic and is used in a few test cases.

The function sameConstraints identifies the following:

  • Constraints that differ by a reordering of input (i.e. SpendsPK and SpendsScript) constraints: Transactions are supposed to have a set of inputs but a list of outputs. See the EUTXO paper (search "for set of inputs").
  • Constraints that specify the same validity time range: [Before b, After a] :=>: os and [ValidateIn $ Interval a b] :=>: os are equivalent, for example.
  • Constraints that mint the same values with the same redeemers: [Mints a _ (v1 <> v2), Mints b _ v3, Mints b _ v4] :=>: os and [Mints a _ v1, Mints b _ (v3 <> v4), Mints a _ v2] :=>: os are equivalent, where the underscores denote the needed lists of minting policies.

The question is

  • whether these identifications really hold, looking at how things are implemented at the moment, and
  • whether we could (and should) in fact identify even more.

At the moment, since the use of sameConstraints is still very limited, the worst thing that can happen is the double satisfaction attack generating slightly too many or too few cases; for the future, I think we should get some function like sameConstraints right.
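
One possible shape for such a function is to compare canonicalised constraints, where the canonicalisation performs exactly the identifications listed above. A minimal sketch (normalize here is hypothetical and would need to be written):

sameConstraints' :: Constraints -> Constraints -> Bool
sameConstraints' a b = normalize a == normalize b
  where
    -- Hypothetical canonicalisation: sort the input constraints, merge Mints
    -- constraints with the same policy and redeemer, and collapse the
    -- time-range constraints into a single ValidateIn interval.
    normalize :: Constraints -> Constraints
    normalize = undefined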

Role of datums in `SpendsScript`

Currently SpendsScript :: (SpendsConstrs a) => Pl.TypedValidator a -> Pl.RedeemerType a -> (SpendableOut, Pl.DatumType a) -> MiscConstraint carries a datum alongside the SpendableOut that we are spending.
This is redundant, since the SpendableOut already points to one unambiguous output.

As far as I remember, the purpose of having this datum in the Constraint was to be able to pretty-print it even outside of a MonadBlockChain (so without having the possibility to recover the datum corresponding to one UTxO). However, now, the pretty printing of the SpendsScript constraints is "<constraint without pretty def>" (see https://github.com/tweag/plutus-libs/blob/13e9d57b766a527e2aa8b9028af998892874592a/cooked-validators/src/Cooked/Tx/Constraints/Pretty.hs#L55).

Three options:

  1. I completely missed one of the uses of this datum and it is impossible to remove it;
  2. We want to remove it;
  3. We want to use it for pretty-printing (note that this point is compatible with point 1).

Rely on `Ledger.Validation` instead of `Ledger.Index` for more faithful transaction validation

I recently learnt that Ledger.Validation is significantly more faithful to how transaction validation happens in a real node. We should stop depending on Ledger.Index and rely on Ledger.Validation ASAP.

We can see examples of both validations happening side-by-side here. It seems like fee computation will be the hardest bit that needs changing (example)

Thank you @sjoerdvisscher for pointing this out to us!

cooked-scripts: refactor `balanceTxFrom` into smaller, composable functions and move it to `Cooked.Tx.Balance`

Currently, balanceTxFrom is a large monolithic function and is hard to test; it would be good to split it up into multiple composable functions that would enable us to write some simple, but crucial, property-based tests.

Also, we should bring in our own Cooked.Wallet.

One option would be factoring it into three functions:

  1. A pure balancing function that receives a Ledger.Value and a list of UTxOs it can spend (essentially removing the need to run this in the MockChain monad).
  2. A function that gets the list of UTxOs from a given PK and calls function (1) above
  3. A function that gets a Plutus.UnbalancedTx and calls (2) above

This would enable us to write property-based tests for the core functionality and increase our confidence that this code has no bugs.
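
A sketch of what the three signatures could look like; the names and exact types below are assumptions for illustration, not existing API:

-- (1) Pure balancing: given the value still to be covered and the UTxOs the
--     wallet may spend, pick UTxOs and compute the leftover change.
balanceValuePure :: Pl.Value -> [(Pl.TxOutRef, Pl.TxOut)] -> Maybe ([Pl.TxOutRef], Pl.Value)
balanceValuePure = undefined

-- (2) Fetch the UTxOs belonging to a given public key and call (1).
balanceValueFromPK :: Monad m => Pl.PubKeyHash -> Pl.Value -> MockChainT m ([Pl.TxOutRef], Pl.Value)
balanceValueFromPK = undefined

-- (3) Compute the value missing from a Plutus.UnbalancedTx and call (2).
balanceUnbalancedTx :: Monad m => Wallet -> Pl.UnbalancedTx -> MockChainT m Pl.Tx
balanceUnbalancedTx = undefined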

Load arbitrary UPLC scripts with our framework

Could we support loading arbitrary UPLC scripts? To an extent, there is nothing really magical to what we do with Plutus itself: we just pass the scripts along to be run by transaction validation. I can imagine that we might be able to load arbitrary UPLC scripts with our framework and accomplish a similar thing.

https://github.com/input-output-hk/plutus/blob/1f31e640e8a258185db01fa899da63f9018c0e85/plutus-ledger-api/src/Plutus/V1/Ledger/Scripts.hs#L97

TestTree to check the richness of a wallet

I would like to be able to state not only that the transaction validates, but also to check that the result is the one I expect, especially to check that a specific wallet has received the token I wanted to give them.
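
A sketch of what such a check could look like as a Tasty test; holdingInState is a hypothetical helper that sums up the value a wallet holds in the final UtxoState, and the qualified imports are assumptions about the project's setup:

import Test.Tasty (TestTree)
import Test.Tasty.HUnit (assertBool, testCase)
import qualified Plutus.V1.Ledger.Value as Value

-- Hypothetical helper: total value held by a wallet in a UtxoState.
holdingInState :: Wallet -> UtxoState -> Value.Value
holdingInState = undefined

testWalletReceived :: Wallet -> Value.Value -> UtxoState -> TestTree
testWalletReceived w expected finalState =
  testCase "wallet received the expected value" $
    assertBool "missing expected tokens" $
      holdingInState w finalState `Value.geq` expected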

Understand whether we need to check for `minUTxOValue` or whether the plutus code we depend on does so automatically

Apparently, there must be some amount of Ada in each UTxO, depending on the size of the data; this is surprising since I can recall creating transactions that create UTxOs with no Ada and some datums.

We must understand where this would be checked within Plutus, so we can decide whether:
i. We get this check for free; or
ii. We need to check for this ourselves; or
iii. It would actually be possible for this check to be skipped by sending raw transactions, in which case we don't care about this.

Babbage / Vasil update

Is your feature request related to a problem? Please describe.
I would like to use cooked-validators with plutus V2 and the latest version of cardano-node (v1.35.3).

Describe the solution you'd like
cooked-validators should be updated to support the changes introduced in the Vasil hardfork and Plutus v2.

Describe alternatives you've considered
No alternatives

Additional context
Work on updating plutus-apps is happening in the next-node branch, which could be used as a starting point for cooked-validators

Interpret a `MockChain` into the Plutus `Contract` monad.

It would be great if we could implement or interpret MonadMockChain into the Contract monad; this way, we could really sell our library as a superior option for defining the lowest level of off-chain code: the part that generates transaction constraints.

Pretty print `Cooked.MockChain.Base.UtxoState` in a readable manner

The result of runMockChain simply returns the resulting UtxoState == Map Addr [(Value, Maybe Datum)]. We need some form of pretty-printing for that map if we ever want to use it while developing transactions. Perhaps we even want a few variations on the theme: one that only prints values; one that prints values and datums; one that prints only what changed from some other UtxoState; etc.

A reasonable output for this would look something like:

pubkey 132af#04ed21:
  - { "Ada": 1000; ("Currency1", "SomeToken"): 200 }
script 4234bf#1248ab:
  - { "Ada": 1230 }
    (SomePrettyPrint ofTheDatum)
...

An amazing output would be something like:

pubkey 132af#04ed21:
  - { "Ada": 1000; ("Currency1", "SomeToken"): 200 }
script "pmultisig" 4234bf#1248ab:
  - { "Ada": 1230 }
    (SomePrettyPrint ofTheDatum ConvertedTo PMultiSigsDatum)
...

Notice the name of the script would be explicit and the datums would be converted to whatever type we
used in the script instead of PlutusTx.Data.

Let's not let perfect be the enemy of good, though; having the first option above will already be great! We can think about how to go from there later.

Mapping cooked behaviour to plutus behaviour regarding the translation from posix intervals to time slots.

Cooked behaves differently than Plutus when going from an abstract interval to concrete time slots, leading to inconsistent closures for the boundaries of translated intervals.

The Plutus function in question can be found here:
https://playground.plutus.iohkdev.io/doc/haddock/plutus-contract/html/Plutus-Contract-Request.html#v:currentTime

While the Cooked function can be found here:
https://github.com/tweag/plutus-libs/blob/04f8e7051ef41f3a3860bacd178704273b6aa29b/cooked-validators/src/Cooked/MockChain/Monad/Direct.hs#L226

In practice, Plutus uses the latest time of the current slot, while Cooked uses the earliest, leading to inconsistencies.
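
To make the reported behaviour concrete, here is a sketch assuming the Ledger.TimeSlot helpers slotToBeginPOSIXTime and slotToEndPOSIXTime; this only illustrates the two conventions, it is not the actual implementation of either side:

import Ledger (POSIXTime, Slot)
import Ledger.TimeSlot (SlotConfig, slotToBeginPOSIXTime, slotToEndPOSIXTime)

-- What Cooked's currentTime effectively reports: the earliest time of the slot.
cookedCurrentTime :: SlotConfig -> Slot -> POSIXTime
cookedCurrentTime = slotToBeginPOSIXTime

-- What Plutus' currentTime reports: the latest time of the slot.
plutusCurrentTime :: SlotConfig -> Slot -> POSIXTime
plutusCurrentTime = slotToEndPOSIXTime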

Explicit sanity checks for Transaction Skeletons

The problem
We sometimes accidentally generate TxSkels that are invalid for reasons like missing inputs, negative Ada amounts, etc. (At least I hope I'm not the only one prone to such mistakes.) Yesterday, I spent more than an hour debugging because of a negative Ada amount that sent the validation into a loop, somehow.

Possible solution/Question
I'd like to have a function isSane :: SanityOptions -> TxSkel -> Bool, maybe also a function sanitize :: SanityOptions -> TxSkel -> TxSkel, for some suitable type SanityOptions, that I can call on my TxSkel and figure out if and how it is broken before handing it to validation.

If something like that already exists, it's at least hidden enough that I've not yet found it.
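
A minimal sketch of the requested interface, with a couple of example options; the record fields and the predicates are hypothetical and would need to inspect the actual TxSkel fields:

data SanityOptions = SanityOptions
  { checkNoNegativeValues :: Bool,
    checkHasInputs :: Bool
  }

isSane :: SanityOptions -> TxSkel -> Bool
isSane opts skel =
  and $
    [noNegativeValues skel | checkNoNegativeValues opts]
      ++ [hasInputs skel | checkHasInputs opts]
  where
    -- Hypothetical predicates over the skeleton.
    noNegativeValues, hasInputs :: TxSkel -> Bool
    noNegativeValues = undefined
    hasInputs = undefined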

Change `validateTx` for the contract monad, making it more flexible

We can implement validateTx for instance MonadBlockChain (Contract w s e) in terms of submitUnbalancedTx, giving us an option to actually hook the unsafeModTx function into the PAB. This would have enabled some clients to use the PAB through cooked-validators instead of having to write their own. Moreover, we can also make adjustUnbalTx work by calling it in between. Maybe we can even start relying on our balancing mechanism.

Following installation instructions leads to building GHC when entering nix-shell

Describe the bug

Expected behavior

  • I expect nix to download whatever it can from the IOHK's caches and build the rest, but not GHC

Environment

  • Linux haskell-dev-vm-1 5.13.0-1012-gcp #15-Ubuntu SMP Wed Jan 12 19:18:58 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
  • Version of the code: 12f6915

Additional context

I am not a nix expert (actually, I hate nix :) ) but use it on a daily basis on the project I am working on without this kind of issue.

Generate a latex output following a test tree hierarchy

For now, the latex output of a Tasty test tree only takes into account leaves with metadata and generates a flat sequence of entries in the latex document. Some test labels also only make sense with the context provided by labels on parent nodes.

Generating a document following the hierarchy of the test tree would be a nice feature. It feels natural in the workflow of an audit to benefit from descriptions written along the tests in the code.

This would require either or both:

  • ability to add metadata to test groups (currently a runtime error)
  • generating entries in the latex document regardless of the presence of metadata or when at least one child has some metadata

More flexible and general attacks

We have the beginnings of a domain specific language for attacks, and I'm thinking about how to extend it. These are mostly my ramblings, which I nonetheless want to share here; maybe you have comments and ideas.

At the moment, the entry points are the modalities somewhere and everywhere. While they are already a very good interface to turn "harmless" traces into "malicious" ones, they are not enough, for example, to check if a minting policy actually enforces NFTs: Look at the test for the token duplication attack, and you will see that the "careful" policy there only ensures that at most a certain amount of tokens is minted in one transaction. In order to write a test that tries to mint again in a second transaction, we presently need to manually write a trace with at least two transactions.

Another nice-to-have would be the ability to combine attacks conjunctively or disjunctively; think of a combinator somewhereA :: (TxSkel -> [TxSkel]) -> m a -> m a, such that:

somewhereA (\tx -> [f tx, g tx]) (validateTx a >> validateTx b) =~= 
  [ validateTx (f a) >> validateTx b
  , validateTx (g a) >> validateTx b
  , validateTx a >> validateTx (f b)
  , validateTx a >> validateTx (g b) ]

(Thanks to @VictorCMiraldo for that idea!)

At the moment, there are three ideas floating around in my head:

  1. "Simply" write a few combinators like somewhereA. Note that the functionality of somewhereA could also be accomplished by writing two somewhere-test cases on the same trace.
  2. Change the types of somewhere and everywhere to take something more general than Maybe. Again, somewhereA kind of suggests a semantics for any Foldable...
  3. (my preferred one) Try to add more "modal logic" operators to MonadModal, so that we can write somewhere f `or` somewhere g to get the functionality of somewhereA, or even somewhere attackThatStealsAToken `followedBy` somewhere attackThatPaysTheToken, where followedBy is some sort of "implication that also knows about the order of transactions" in our logic.

What do you think?

HLS works in cooked-validators, fails in examples

Note: I named the directory plutus-libs-tweag (instead of plutus-libs).

cd plutus-libs-tweag
nix-shell
emacs

In emacs, navigate to any haskell file in cooked-validators/src/*. HLS works correctly.
Note: HLS works in cooked-validators/tests/* too.
For success, the emacs buffer *lsp-haskell:stderr* contains:

Found "/Users/hcarr/plutus-libs-tweag/hie.yaml" for "/Users/hcarr/plutus-libs-tweag/a"
Run entered for haskell-language-server-wrapper(haskell-language-server-wrapper) Version 1.5.1.0 x86_64 ghc-8.10.4.20210212
Current directory: /Users/hcarr/plutus-libs-tweag
Operating system: darwin
Arguments: ["--lsp","-d","-l","/tmp/hls.log"]
Cradle directory: /Users/hcarr/plutus-libs-tweag
Cradle type: Cabal

Tool versions found on the $PATH
cabal:		3.6.2.0
stack:		2.7.3
ghc:		8.10.4.20210212


Consulting the cradle to get project GHC version...
Project GHC version: 8.10.4.20210212
haskell-language-server exe candidates: ["haskell-language-server-8.10.4.20210212","haskell-language-server"]
Launching haskell-language-server exe at:/nix/store/bqy1n8k5jb2hpf3zllg99qr7v49r9gbl-haskell-language-server-exe-haskell-language-server-1.5.1.0/bin/haskell-language-server
haskell-language-server version: 1.5.1.0 (GHC: 8.10.4.20210212) (PATH: /nix/store/bqy1n8k5jb2hpf3zllg99qr7v49r9gbl-haskell-language-server-exe-haskell-language-server-1.5.1.0/bin/haskell-language-server)
Starting (haskell-language-server)LSP server...
  with arguments: GhcideArguments {argsCommand = LSP, argsCwd = Nothing, argsShakeProfiling = Nothing, argsTesting = False, argsExamplePlugin = False, argsDebugOn = True, argsLogFile = Just "/tmp/hls.log", argsThreads = 0, argsProjectGhcVersion = False}
  with plugins: [PluginId "pragmas",PluginId "floskell",PluginId "fourmolu",PluginId "tactics",PluginId "ormolu",PluginId "stylish-haskell",PluginId "retrie",PluginId "brittany",PluginId "callHierarchy",PluginId "class",PluginId "haddockComments",PluginId "eval",PluginId "importLens",PluginId "refineImports",PluginId "moduleName",PluginId "hlint",PluginId "splice",PluginId "ghcide-hover-and-symbols",PluginId "ghcide-code-actions-imports-exports",PluginId "ghcide-code-actions-type-signatures",PluginId "ghcide-code-actions-bindings",PluginId "ghcide-code-actions-fill-holes",PluginId "ghcide-completions",PluginId "ghcide-type-lenses",PluginId "ghcide-core"]
  in directory: /Users/hcarr/plutus-libs-tweag

In emacs, navigate to any haskell file in examples/src/*. HLS fails.

The emacs *Messages* buffers contains:

LSP :: Connected to [lsp-haskell:93830].
LSP :: lsp-haskell has exited (exited abnormally with code 1)
LSP :: Sending to process failed with the following error: Process lsp-haskell not running
next-line: End of buffer
Server lsp-haskell:93830 exited with status exit(check corresponding stderr buffer for details). Do you want to restart it? (y or n) n

The emacs *lsp-haskell:stderr* buffer contains:

Found "/Users/hcarr/plutus-libs-tweag/hie.yaml" for "/Users/hcarr/plutus-libs-tweag/a"
Run entered for haskell-language-server-wrapper(haskell-language-server-wrapper) Version 1.5.1.0 x86_64 ghc-8.10.4.20210212
Current directory: /Users/hcarr/plutus-libs-tweag
Operating system: darwin
Arguments: ["--lsp","-d","-l","/tmp/hls.log"]
Cradle directory: /Users/hcarr/plutus-libs-tweag
Cradle type: Cabal

Tool versions found on the $PATH
cabal:		3.6.2.0
stack:		2.7.3
ghc:		8.10.4.20210212


Consulting the cradle to get project GHC version...
Project GHC version: 8.10.4.20210212
haskell-language-server exe candidates: ["haskell-language-server-8.10.4.20210212","haskell-language-server"]
Launching haskell-language-server exe at:/nix/store/bqy1n8k5jb2hpf3zllg99qr7v49r9gbl-haskell-language-server-exe-haskell-language-server-1.5.1.0/bin/haskell-language-server
haskell-language-server version: 1.5.1.0 (GHC: 8.10.4.20210212) (PATH: /nix/store/bqy1n8k5jb2hpf3zllg99qr7v49r9gbl-haskell-language-server-exe-haskell-language-server-1.5.1.0/bin/haskell-language-server)
Starting (haskell-language-server)LSP server...
  with arguments: GhcideArguments {argsCommand = LSP, argsCwd = Nothing, argsShakeProfiling = Nothing, argsTesting = False, argsExamplePlugin = False, argsDebugOn = True, argsLogFile = Just "/tmp/hls.log", argsThreads = 0, argsProjectGhcVersion = False}
  with plugins: [PluginId "pragmas",PluginId "floskell",PluginId "fourmolu",PluginId "tactics",PluginId "ormolu",PluginId "stylish-haskell",PluginId "retrie",PluginId "brittany",PluginId "callHierarchy",PluginId "class",PluginId "haddockComments",PluginId "eval",PluginId "importLens",PluginId "refineImports",PluginId "moduleName",PluginId "hlint",PluginId "splice",PluginId "ghcide-hover-and-symbols",PluginId "ghcide-code-actions-imports-exports",PluginId "ghcide-code-actions-type-signatures",PluginId "ghcide-code-actions-bindings",PluginId "ghcide-code-actions-fill-holes",PluginId "ghcide-completions",PluginId "ghcide-type-lenses",PluginId "ghcide-core"]
  in directory: /Users/hcarr/plutus-libs-tweag
 haskell-language-server-wrapper: callProcess: /nix/store/bqy1n8k5jb2hpf3zllg99qr7v49r9gbl-haskell-language-server-exe-haskell-language-server-1.5.1.0/bin/haskell-language-server "--lsp" "-d" "-l" "/tmp/hls.log" (exit -11): failed

Process lsp-haskell stderr finished

Environment

  • Mac OS 11.6.2
  • Version of the code: 29f041e

Understand why scripts loaded from binary UPLC have a different address.

The Problem

Scripts loaded from binary UPLC have a different address than scripts loaded from a PlutusTx.compile; because the address
of a script (AFAIK) is the hash of its source, this behavior raises an alarm that serializing then deserializing a script could be changing its source.

The Test

Here is the test that should pass, but was marked as expectedFail until we investigate this issue.
Here's the output of running the test suite as of f0501c3:

  SplitSpec imported from UPLC
    Simple example succeeds:                                OK (0.04s)
    Same address as compiled script:                        FAIL (expected)
      tests/SplitUPLCSpec.hs:90:
      expected: Address {addressCredential = ScriptCredential 7cf78c7583416d52a7cec69455392fe690ed4f02c832f5742eb4be71, addressStakingCredential = Nothing}
       but got: Address {addressCredential = ScriptCredential de83fdcc19466a286e174a03de5ec9228ef642a7ab607ddf32e8f962, addressStakingCredential = Nothing} (expected failure)

Deserializing Scripts

The splitValidator is defined as:

splitValidator = Scripts.mkTypedValidator @Split
                   $$(PlutusTx.compile [||validateSplit||])
                   $$(PlutusTx.compile [||wrap||])

which, by the definition of mkTypedValidator, is somewhat equivalent to:

sv = Scripts.mkValidatorScript $
       $$(PlutusTx.compile [||wrap||])
       `applyCode` $$(PlutusTx.compile [||validateSplit||])

which, in turn, expands to (through mkValidatorScript)

sv = Validator . fromCompiledCode $
        $$(PlutusTx.compile [||wrap||])
        `applyCode` $$(PlutusTx.compile [||validateSplit||])

Now, expanding fromCompiledCode we have:

sv = Validator . fromPlc . getPlc $
       $$(PlutusTx.compile [||wrap||])
       `applyCode` $$(PlutusTx.compile [||validateSplit||])

Now, because unflat . flat should be equal to Right, I'd have expected Right sv to be equivalent to:

Right sv = fmap (Validator . fromPlc) . unflat . flat . getPlc $
             $$(PlutusTx.compile [||wrap||])
             `applyCode` $$(PlutusTx.compile [||validateSplit||])

Finally, to wrap all of this into a TypedValidator Split, we do:

splitFromBS = unsafeCoerce . fmap (unsafeMkTypedValidator . Validator . fromPlc) . unflat $ splitBS

where splitBS is defined as:

splitBS = flat . getPlc $
  $$(PlutusTx.compile [||wrap||])
  `applyCode` $$(PlutusTx.compile [||validateSplit||])

The Punchline

I'd have expected that splitFromBS == validateSplit, but that is clearly not the case, since these scripts end up with different addresses.

Modularise the attack language

With PR #140 merged, I begin to feel the need to start thinking about modularising our attacks. For example, the 'doubleSatAttack' is conceptually an attack that adds some constraints, followed by a "balancing attack", that pays any added value to the attacker. This second step is common to almost all attacks; it seems reasonable to have a balanceAttack.

If Attacks are the atoms of our attack language, let's start looking for particularly small atoms, and rely on the Monoid instance of Attack to combine them into double satisfaction, datum hijacking, ... attacks.

Is there a reason to use the optics package?

Is your feature request related to a problem? Please describe.
The constraint optics are defined using optics instead of the types from lens. I think it is doing this because it's a lighter weight package? Since lens is by far more popular in downstream dependencies, it often forces projects to depend on both libraries anyway.

Describe the solution you'd like
What about using microlens instead? Possibly even lighter weight and out-of-the-box compatible with lens.
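
For what it's worth, a lens defined with microlens has the same van Laarhoven shape as one from lens, so lens-based downstream code can consume it directly. A small self-contained illustration (unrelated to the library's actual optics):

import Lens.Micro (Lens', lens, (&), (.~), (^.))

newtype Box = Box {boxContent :: Int} deriving (Show)

-- A lens written with microlens; any lens-based consumer accepts it as-is.
contentL :: Lens' Box Int
contentL = lens boxContent (\b x -> b {boxContent = x})

example :: (Int, Box)
example = (Box 1 ^. contentL, Box 1 & contentL .~ 2) -- (1, Box {boxContent = 2})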

Improve pretty printing of the number of Ada

A small improvement could be made to the pretty printer to display the number of Ada more conveniently, by splitting the Ada and the Lovelace and grouping the digits in threes, as follows:

From Ada: 87999950 to Ada: 87 ; Lovelace: 999 950

This would make the values more readable and allow us to focus on the number of Ada rather than the lovelaces as a whole.
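
A sketch of the suggested formatting in plain Haskell (not the library's actual pretty-printer), assuming non-negative amounts:

-- | Split a Lovelace amount into Ada and Lovelace and group digits by three,
-- e.g. 87999950 becomes "Ada: 87 ; Lovelace: 999 950".
prettyAda :: Integer -> String
prettyAda lovelace = "Ada: " ++ groupDigits ada ++ " ; Lovelace: " ++ groupDigits rest
  where
    (ada, rest) = lovelace `divMod` 1000000

-- | Insert a space every three digits, counting from the right.
groupDigits :: Integer -> String
groupDigits = reverse . unwords . chunksOf 3 . reverse . show
  where
    chunksOf _ [] = []
    chunksOf n xs = take n xs : chunksOf n (drop n xs)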

How to deal with existential type variables of constraint types in writing attacks?

Say you want to write an attack that tampers with datums on PaysScript constraints. The problem with such an attack is the existential type variable in the PaysScript constructor:

  PaysScript ::
    (PaysScriptConstrs a) =>
    Pl.TypedValidator a ->
    Pl.DatumType a ->
    Pl.Value ->
    OutConstraint

That is, there is no straightforward way to apply a monomorphic function that does some modification to a specific DatumType without some type hackery, as we used for example in the definition of the Eq instance for OutConstraint.

It might be possible to write functions with types like

applyHeterogeneous :: forall a b c. (Typeable a, Typeable b) => (b -> c) -> a -> Maybe c

(which returns Just its first argument applied to the second argument, iff the types a and b match) and abstract common type hacks, but maybe there's a more principled solution?
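
For what it's worth, Data.Typeable's cast gives one possible implementation of the function sketched above; this only illustrates the type hack, not necessarily the more principled solution asked for:

{-# LANGUAGE ScopedTypeVariables #-}

import Data.Typeable (Typeable, cast)

-- Applies the function iff the existentially hidden type 'a' is in fact 'b'.
applyHeterogeneous :: forall a b c. (Typeable a, Typeable b) => (b -> c) -> a -> Maybe c
applyHeterogeneous f x = f <$> cast x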

Question: can we make a blanket double satisfaction attack?

A common attack to most scripts is to rely on a double satisfaction vulnerability: the combination of consuming two utxos that would cause the script to validate but in some slightly off manner. For example, when auditing a DEX we always try to execute the refunding of an order, but consume two order UTxOs instead and pay the difference to an attacker.

Can we make this into a blanket attack in cooked, just like we have with datum hijacking and token duplication?

Fix `balanceTxFrom`

Our balanceTxFrom code is buggy. It needs way more tests. There are two situations where it tries to duplicate values:

  1. When leftover == mempty, there should be no added UTxO to the outputs.
  2. When the usedUTxO is in the transaction already, the leftover has to be adapted, because it is computed assuming the transaction would consume NEW usedUTxOs.
balanceTxFrom :: (Monad m) => Bool -> Wallet -> Pl.UnbalancedTx -> MockChainT m Pl.Tx
balanceTxFrom dbg w (Pl.UnbalancedTx tx0 _reqSigs _uindex slotRange) = do
  -- We start by gathering all the inputs and summing it
  let tx = tx0 {Pl.txFee = Pl.minFee tx0}
  lhsInputs <- mapM (outFromOutRef . Pl.txInRef) (S.toList (Pl.txInputs tx))
  let lhs = mappend (mconcat $ map Pl.txOutValue lhsInputs) (Pl.txMint tx)
  let rhs = mappend (mconcat $ map Pl.txOutValue $ Pl.txOutputs tx) (Pl.txFee tx)
  let wPKH = walletPKHash w
  let tgt = rhs Pl.- lhs
  (usedUTxOs, leftOver) <- balanceWithUTxOsOf (rhs Pl.- lhs) wPKH
  -- PROBLEMS BELOW!!
  -- All the UTxOs signed by the sender of the transaction and useful to balance it
  -- are added to the inputs.
  let txIns' = map (`Pl.TxIn` Just Pl.ConsumePublicKeyAddress) usedUTxOs
  -- A new output is opened with the leftover of the added inputs.
  let txOuts' = Pl.TxOut (Pl.Address (Pl.PubKeyCredential wPKH) Nothing) leftOver Nothing
  config <- asks mceSlotConfig
  return
    tx
      { Pl.txInputs = Pl.txInputs tx <> S.fromList txIns',
        Pl.txOutputs = Pl.txOutputs tx ++ [txOuts'],
        Pl.txValidRange = Pl.posixTimeRangeToContainedSlotRange config slotRange
      }
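
One possible direction for problem (1), sketched and untested: only open the change output when the leftover is actually non-empty, for example by turning txOuts' into a list,

  -- Only open a change output when there is actually something left over.
  let txOuts'
        | leftOver == mempty = []
        | otherwise =
            [Pl.TxOut (Pl.Address (Pl.PubKeyCredential wPKH) Nothing) leftOver Nothing]

and then appending it with Pl.txOutputs tx ++ txOuts'.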

Bring back the `MonadModal` class and the `modify` combinator.

Users should never use StartLtl and StopLtl explicitly; that is error-prone and confusing. We should have a combinator that reads:

class MonadModal m where
   modify :: Ltl mod -> m a -> m a

instance MonadModal (Staged (LtlOp mod bin)) where
  modify f tr = startLtl f >> tr <* stopLtl

Now, somewhere and everywhere are defined as:

somewhere f = modify (eventually (LtlAtom f))
everywhere f = modify (forever (LtlAtom f))

eventually f = LtlTruth `LtlUntil` f
forever f = LtlFalsity `LtlRelease` f

This means that the user gets to rely only on modify.

Explore the property-based testing idea, which was started at `Cooked.Traces`

Disclaimer

This is an exploratory issue and what follows is a braindump; don't put too much effort into making things look exactly like what's below. As long as we get a powerful mechanism for property-based testing, it's irrelevant how it looks :)

The Problem with Plutus property-based checking

In Plutus, they created a whole complex beast for testing properties, based on quickcheck-dynamic. Yet, they made the mistake of relying on the EmulatorTrace and Contract monads, baking in the need to call the client-defined endpoints or to write your own.
That's analogous to attesting the security of some service by saying "we clicked on all the UI buttons and nothing broke".

The Vision

We started defining the notion of a Trace as a number of steps that output transaction skeletons; the point of that is to enable us to write something that looks like property-based tests. We could have combinators such as:

someInterleavings 
        :: Tr a b -> a -- receives a trace and a seed;
        -> (b -> UtxoState -> Bool) -- and a predicate to test on the output of the trace
        -> TraceMonad Bool -- then runs the test on some variations of the given trace

or

equivMod -- checks that two traces are equivalent modulo some relation
  :: Tr a b -> Tr a b -> a
  -> ((b, UtxoState) -> (b, UtxoState) -> Bool)
  -> Bool

The TraceMonad here would have a collection of tr :: forall a . Tr a a which could be used to create interleavings with the current trace. It is paramount that we study the quickcheck-dynamic library from Plutus; we might even be able to reuse quite a lot of it.

Open Questions

  1. I originally made Tr :: * -> * -> * to be able to pass around system parameters that might be needed to instantiate some scripts; yet, I did not really explore alternatives. It's certainly worth thinking a little about this, since it would suck to pay the price of having a Tr type that is more general than it needs to be.

  2. Is TraceMonad needed? I don't think so. But then, how do we choose what to interleave Tr a b with? Maybe we shouldn't be thinking of interleavings, but instead of trace generators. What if we define:
    someInterleavings :: (Arbitrary seed) => (seed -> Gen (Tr a b)) -> a -> (seed -> b -> UtxoState -> Bool) -> ...
    and now we can talk about trace generators; there's no need for interleaving whatsoever.

Misleading comments about circular dependencies in the Auction contract

The first substantial comment in the source code of the auction contract example is misleading at best, and false at worst (in any case, it's confusing). It talks of breaking a circular dependency between a minting policy and a certain validator: in order for the contract to work, each has to know the other to ensure that

  • the NFT minted by the minting policy is paid to the correct validator, and
  • the validator knows the asset class of the NFT (which includes the hash of the minting policy as its currency symbol).

That circle is broken by the code in the sense that, by submitting the hash of the validator as a redeemer to the minting policy, the necessary information can be shared between the two scripts. It is not ensured, however, that the information in that redeemer is correct. In order to achieve this, each script's hash would need to be a parameter to the other script -- not a redeemer -- which is patently impossible. It's unclear to me if this problem has a solution.

We should:

  • reformulate the comment to include the observation above,
  • document the attack on the txOpen transaction, and
  • optionally explain what the "real" solution to the problem would be, or why there is no solution.

Write more elements of type `Constraint`

Currently, Cooked.Tx.Constraint only contains basic constraints. We should expand that with constraints over minted values and transaction validation time at least.

Understand and standardize `adjustUnbalancedTx`.

I believe there is a bug in adjustUnbalancedTx. It shouldn't filter out the outputs that contain zero value; otherwise, when running a transaction generated by:

PaysToScript scriptAddr (someDatum, mzero)

the output that actually pays to the script will be silently filtered out, the transaction will be sent, and it might actually succeed, even though the transaction that was sent DOES NOT satisfy the constraints it was generated from.

Another aspect to keep in mind is the fee; also keep track of this issue: IntersectMBO/plutus-apps#143

Print depending on the use-case

There are essentially two situations in which one runs a trace:

  • One wants to inspect it. In this case, it is usually run via a REPL using Cooked.runMockChainFrom.
  • One runs the whole test suite. In this case, one usually runs cabal run tweag-audit, which internally uses Tasty.testProperty and Cooked.testSucceedsFrom/Cooked.testFailsFrom.

These two use cases call for very different levels of verbosity. When inspecting a specific case, I want to see as much information as possible: the final distribution of the tokens, the error that occurred, or the transaction that was sent. When executing the whole test suite, I mainly care about the red and green lights and do not want that information to hide the relevant data.

Hence, having an analogue of trace that only prints when run with Cooked.runMockChainFrom would be very useful.

Convenience for bootstraping a mockchain with custom assets

During audits, some minting policies are not provided and it takes repetitive boilerplate code to specify them and then mint and distribute some assets among wallets before reaching the actual desired initial state where that currency is spread in the world alongside Ada.

For example, this is a typical module providing convenience functions for a currency that is supposed to already exist in the wild. We use a OneShotCurrency here because it is simple, and we have an admin wallet mint everything and then distribute it among the other wallets (this could be simplified and done in a single transaction without the "admin"; it is just code taken from a recent audit where an admin actually exists).

adminCoin :: Cooked.Wallet
adminCoin = Cooked.wallet 1

adminCoinPKH :: Plutus.PubKeyHash
adminCoinPKH = Cooked.walletPKHash adminCoin

coinTotalAmount :: Integer
coinTotalAmount = 100_000_000

coinPolicy :: Plutus.MintingPolicy
coinPolicy =
  Plutus.curPolicy Plutus.$
    Plutus.OneShotCurrency
      (h, i)
      (Plutus.fromList [("coin", coinTotalAmount)])
  where
    Right ((h, i), _) = Cooked.runMockChain $ do -- TODO: make type safe
      [(Plutus.TxOutRef h' i', _)] <- Cooked.pkUtxos' adminCoinPKH
      return (h', i')

coinCurrency :: Plutus.CurrencySymbol
coinCurrency = Plutus.scriptCurrencySymbol coinPolicy

coinAsset :: Plutus.AssetClass
coinAsset = Plutus.assetClass coinCurrency "coin"

coinGeneration :: (Cooked.MonadMockChain m) => m ()
coinGeneration =
  Cooked.validateTxFromSkeleton $
    Cooked.txSkel
      adminCoin
      [ Cooked.mints [coinPolicy] oneHundredMillionCoin,
        Cooked.PaysPK adminCoinPKH oneHundredMillionCoin
      ]
  where
    oneHundredMillionCoin = Plutus.assetClassValue coinAsset 100_000_000

coin :: Integer -> Plutus.Value
coin = Plutus.assetClassValue coinAsset

-- | Distribute coins from admin to different pkh.
coinDistribution ::
  Cooked.MonadMockChain m =>
  -- | Quantity of coin to give to each recipient
  [(Plutus.PubKeyHash, Integer)] ->
  m ()
coinDistribution = Cooked.validateTxFromSkeleton . Cooked.txSkel adminCoin . map pays
  where
    pays (recipientPKH, quantity) = Cooked.PaysPK recipientPKH (coin quantity)

It would be nice to be able to quickly specify:

  • a name for the tokens
  • an initial distribution of these among the known wallets :: [(Wallet, Integer)]

and then to be able to bootstrap a mockchain with the chosen initial distribution of freshly minted tokens, together with a set of helper functions mapping the chosen name to the token name, currency symbol, asset class, and value constructor (see the sketch below).
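
A sketch of the kind of helper this asks for; the names below, and the assumption that the mock chain can be bootstrapped from an InitialDistribution-like value, are hypothetical:

-- Hypothetical: build an initial distribution in which, besides the default
-- Ada, each listed wallet starts with the given quantity of a freshly minted
-- token with the given name, and return helpers to talk about that token
-- (its asset class and a value constructor).
withFreshToken ::
  Plutus.TokenName ->
  [(Cooked.Wallet, Integer)] ->
  (Cooked.InitialDistribution, Plutus.AssetClass, Integer -> Plutus.Value)
withFreshToken = undefined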
