volume-cartographer's Issues

[Bug] PPM error when loading cell map > 1GB

What happened?

big-cellmap

Steps to reproduce

No response

Version

No response

How did you install the software?

None

On which operating systems have you experienced this issue?

  • macOS
  • Windows
  • Linux

Relevant log output

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

[Bug] Ambiguity for layer stack coordinate positions

Layer generation with vc_layers or vc_layers_from_ppm uses the LineGenerator class to compute a vector of intensities for each pixel in the PPM, then saves each slice along those accumulated vectors as a separate image. Presumably, one could reconstruct the original 3D coordinates corresponding to each voxel in the layer stack like so:

# ppm[y, x] = (x, y, z, nx, ny, nz)
pt = ppm[y, x, :3]
n = ppm[y, x, 3:6]
offset = layer_idx - num_layers // 2
pos = pt + offset * n

However, this calculation is only accurate if the radius value for the neighborhood is an integer value. Consider layer stacks generated with a radius of 1.2 and 1.5 (assumed bidirectional sampling):

from math import floor

interval = 1.0
for r in [1.2, 1.5]:
  # num_layers = floor((r - -r) / interval) + 1
  num_layers = floor(r + (r / interval)) + 1
  # offsets = min_r + idx * interval
  print([-r + idx * interval for idx in range(num_layers)])  # [-1.2, -0.2, 0.8]
                                                             # [-1.5, -0.5, 0.5, 1.5]

With these values, the offsets assumed by the reconstruction algorithm above would be incorrect:

for num_layers in [3, 4]:
  print([i - num_layers//2 for i in range(num_layers)])  # [-1, 0, 1]
                                                         # [-2, -1, 0, 1]

With a unit interval, the reconstruction code above is off by the fractional part of the radius: for $\{r\} \in (0, 0.5)$ it overestimates each offset by $\{r\}$, while for $\{r\} \in [0.5, 1)$ it underestimates each offset by $1 - \{r\}$.

Unfortunately, I don't think the 3D positions in a layer stack can be exactly reconstructed without also recording the radius value used when generating the layers. As this is sometimes auto-determined from the voxel size and estimated layer thickness, it's not guaranteed to be an integer value.
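
For reference, a minimal sketch of the exact reconstruction when the radius and interval are known (variable names are hypothetical; the offsets follow the min_r + idx * interval formula above, assuming bidirectional sampling):

import numpy as np

def layer_voxel_position(ppm, y, x, layer_idx, radius, interval=1.0):
    """Reconstruct the 3D position of layer stack voxel (layer_idx, y, x).

    ppm is assumed to be an (H, W, 6) array of [x, y, z, nx, ny, nz].
    """
    pt = np.asarray(ppm[y, x, :3])
    n = np.asarray(ppm[y, x, 3:6])
    offset = -radius + layer_idx * interval  # the exact offset used during generation
    return pt + offset * n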

I'm not sure that this is exactly a bug, per se, but it is something that should be documented and improved. I do think it would be helpful if the surface voxel were always exactly recorded when doing bidirectional sampling, but that would likely mean adjusting the requested radius in order to maintain a uniform sampling interval. Maybe that's fine? It seems fraught when the sampling interval is also not an integer, though.

Anyway, I'm open to thoughts or suggestions on this topic. Since the error is small and easily solved with some bookkeeping, I don't think this is a critical issue, and we have some time to think through the implications.

Re-implement multi-volume imports in vc_packager

What problem is your feature request solving?

Pull request #62 removed the ability to import multiple volumes with a single vc_packager command, so that it could change the volume information inputs from stdin to program arguments.

What is your feature request?

Reimplement this functionality using program arguments.
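
As a rough illustration only (the flag names below are hypothetical and this is a Python argparse stand-in, not the actual vc_packager interface), repeated argument groups could describe one volume each:

import argparse

# Hypothetical sketch: each --slices/--name pair describes one volume to import.
parser = argparse.ArgumentParser(description="multi-volume import sketch")
parser.add_argument("--volpkg", required=True, help="output .volpkg path")
parser.add_argument("--slices", action="append", default=[],
                    help="slice directory for a volume (repeatable)")
parser.add_argument("--name", action="append", default=[],
                    help="display name for the corresponding --slices entry (repeatable)")
args = parser.parse_args()

for slices, name in zip(args.slices, args.name):
    print(f"would import volume '{name}' from {slices} into {args.volpkg}")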

What alternative solutions have you considered?

No response

Is there anything else we should know that wasn't included already?

No response

Are you proposing to work on this feature yourself?

  • I am willing to submit a pull request for this feature

Code of Conduct

  • I agree to follow this project's Code of Conduct

Application breaks when using segmentation tool

I've followed tutorial 3 in the Vesuvius Challenge. However, when I use the segmentation tool in the following step

guide

the application crashes. Here's a short video:

capture.mp4

Not sure if it's relevant, but this is how I installed it 👉 #5
And my device information:

macOS: Ventura 13.2.1
CPU: 2.3 GHz dual Intel Core i5
GPU: Intel Iris Plus Graphics 640

By the way, thanks to the people behind the scenes for all the difficult work that made this challenge possible 🙏
Can't wait to see what happens in the next few years 🚀

(VC) Add support for algorithms which propagate in -Z

I recommend updating this function as well. Added bonuses:

  • Compatible with backwards segmentation
  • Does not delete the existing segmentation above the current segmentation job
  • Displays the target segmentation layer when the segmentation job finishes

void CWindow::onSegmentationFinished(Segmenter::PointSet ps)
{
    setWidgetsEnabled(true);
    worker_progress_updater_.stop();
    worker_progress_.close();
    // 3) concatenate the two parts to form the complete point cloud
    // find starting location in fMasterCloud
    int i;
    for (i = 0; i < fMasterCloud.height(); i++) {
        auto masterRowI = fMasterCloud.getRow(i);
        if (ps[0][2] <= masterRowI[fUpperPart.width()-1][2]){
            break;
        }
    }

    // remove the duplicated point and ps in their stead. if i at the end, no duplicated point, just append
    fUpperPart = fMasterCloud.copyRows(0, i);
    fUpperPart.append(ps);

    // check if remaining rows already exist in fMasterCloud behind ps
    for(; i < fMasterCloud.height(); i++) {
        auto masterRowI = fMasterCloud.getRow(i);
        if (ps[ps.size() - 1][2] < masterRowI[fUpperPart.width()-1][2]) {
            break;
        }
    }
    // add the remaining rows
    if (i < fMasterCloud.height()) {
        fUpperPart.append(fMasterCloud.copyRows(i, fMasterCloud.height()));
    }

    fMasterCloud = fUpperPart;

    statusBar->showMessage(tr("Segmentation complete"));
    fVpkgChanged = true;

    // set display to target layer
    fPathOnSliceIndex = fSegParams.targetIndex;
    CleanupSegmentation();
    UpdateView();
}

Originally posted by @schillij95 in #25 (comment)

Discontinuous normals in ppm files

What happened?

I was testing some code to read ppm files. This code would read the ppm file, use the xyz information in the file to create a mesh in xyz space, and use the normal information in the file to create lines in xyz space extending from the mesh. After hiding most of the mesh and most of the normals (in order to create a picture I could make sense of), I got this:
discontinuous_normals
I noticed there were a number of discontinuities, which seemed to be located at the boundaries of the triangles from the original obj mesh that was used to create the ppm file.
I dug into the VC code, and in PPMGenerator.cpp came across the PhongNormal function, which contains the following lines:

 return cv::normalize(
        (1 - nUVW[0] - nUVW[1]) * nA + nUVW[1] * nB + nUVW[2] * nC);

This formula seems odd to me; I would have expected:

nUVW[0] * nA + nUVW[1] * nB + nUVW[2] * nC

I'm not currently in a position to compile and run the VC code, so I cannot test whether my version would solve the discontinuous-normal problem. But in any case, I believe the discontinuous normals could be the cause of the "sharkbite" problem that was previously reported (bug #33).
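
For illustration, a small numpy sketch of the barycentric interpolation I would expect (not the VC implementation; uvw is assumed to hold the barycentric coordinates of the sample point with respect to vertices A, B, C):

import numpy as np

def phong_normal(uvw, nA, nB, nC):
    """Interpolate vertex normals with barycentric weights (u, v, w), where u + v + w == 1."""
    u, v, w = uvw
    n = u * np.asarray(nA) + v * np.asarray(nB) + w * np.asarray(nC)
    return n / np.linalg.norm(n)

# Note: if u + v + w == 1, then (1 - u - v) == w, so the formula currently in
# PPMGenerator.cpp weights nA by w and never uses u at all.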

Steps to reproduce

No response

Version

No response

How did you install the software?

None

On which operating systems have you experienced this issue?

  • macOS
  • Windows
  • Linux

Relevant log output

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

[Feature] Add associated volume to PPMs

What problem is your feature request solving?

PPMs are often generated from surfaces which have associated volume IDs (i.e. from VC Segmentations as opposed to some arbitrary input mesh). However, this information is lost in the PPM encoding. As a consequence, the vc_*_from_ppm utilities generally require the --volume flag in order to function correctly.

What is your feature request?

Add the associated volume metadata to the PPM header. The setter and getter will be easy to add to the PerPixelMap class. Changing the PointSetIO interface to support extra metadata fields in the header is more complicated, but not impossible (perhaps something with std::optional or std::map). Updating usages will also require some attention to detail, but overall this should be fairly easy.

What alternative solutions have you considered?

We could make the --volume flag required rather than defaulting to the first volume, but that makes single-volume .volpkgs more annoying to work with and ultimately just makes the user work harder.

Is there anything else we should know that wasn't included already?

No response

Are you proposing to work on this feature yourself?

  • I am willing to submit a pull request for this feature

Code of Conduct

  • I agree to follow this project's Code of Conduct

[Bug] GCC 13 compilation error

What happened?

It appears that GCC 13 started enforcing stricter adherence to the C++ standard, and we now must include <cstdint> in order to have access to the fixed-width integer types. VC suffers from this issue both in its own codebase

In file included from /run/media/xxxx/work/vesuvius/volume-cartographer/build/core/Version.cpp:1:
/run/media/xxxx/work/vesuvius/volume-cartographer/core/include/vc/core/Version.hpp:19:35: error: ‘uint32_t’ does not name a type
   19 |     static auto VersionMajor() -> uint32_t;
      |                                   ^~~~~~~~
/run/media/xxxx/work/vesuvius/volume-cartographer/core/include/vc/core/Version.hpp:6:1: note: ‘uint32_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
    5 | #include <string>
  +++ |+#include <cstdint>

as well as in its smgl in-source dependency.

In file included from /run/media/xxxx/work/vesuvius/volume-cartographer/build/_deps/smgl-src/smgl/src/Uuid.cpp:1:
/run/media/xxxx/work/vesuvius/volume-cartographer/build/_deps/smgl-src/smgl/include/smgl/Uuid.hpp:20:18: error: ‘uint8_t’ does not name a type
   20 |     using Byte = uint8_t;
      |                  ^~~~~~~
/run/media/xxxx/work/vesuvius/volume-cartographer/build/_deps/smgl-src/smgl/include/smgl/Uuid.hpp:7:1: note: ‘uint8_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
    6 | #include <string>
  +++ |+#include <cstdint>
    7 | 

Steps to reproduce

  1. Install GCC 13 and make it the default compiler
  2. Follow the instructions in https://github.com/educelab/volume-cartographer?tab=readme-ov-file#compilation to build from source.

Version

HEAD

How did you install the software?

Built from source

On which operating systems have you experienced this issue?

  • macOS
  • Windows
  • Linux

Relevant log output

In file included from /run/media/xxxx/work/vesuvius/volume-cartographer/build/core/Version.cpp:1:
/run/media/xxxx/work/vesuvius/volume-cartographer/core/include/vc/core/Version.hpp:19:35: error: ‘uint32_t’ does not name a type
   19 |     static auto VersionMajor() -> uint32_t;
      |                                   ^~~~~~~~
/run/media/xxxx/work/vesuvius/volume-cartographer/core/include/vc/core/Version.hpp:6:1: note: ‘uint32_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
    5 | #include <string>
  +++ |+#include <cstdint>
In file included from /run/media/xxxx/work/vesuvius/volume-cartographer/build/_deps/smgl-src/smgl/src/Uuid.cpp:1:
/run/media/xxxx/work/vesuvius/volume-cartographer/build/_deps/smgl-src/smgl/include/smgl/Uuid.hpp:20:18: error: ‘uint8_t’ does not name a type
   20 |     using Byte = uint8_t;
      |                  ^~~~~~~
/run/media/xxxx/work/vesuvius/volume-cartographer/build/_deps/smgl-src/smgl/include/smgl/Uuid.hpp:7:1: note: ‘uint8_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
    6 | #include <string>
  +++ |+#include <cstdint>
    7 | 


Code of Conduct

  • I agree to follow this project's Code of Conduct

[Bug] Layers (0-64) render in the wrong dimension

What happened?

I tried to render an 88 keV Scroll 2 segment. This is the segmenter team's first time rendering 88 keV scroll data. The output by vc_render looks as expected.
image

But the layers (0.tif-64.tif) appear masked in the wrong dimension.

32.tif, for example:
image

Note: when copying the images here, I rotated each 90 degrees clockwise.

Steps to reproduce

These images were generated using the following command:

export SLICE=20230801194757 && \
  cd /Scroll2.volpkg/paths/${SLICE} && \
  nice vc_convert_pointset -i pointset.vcps -o "${SLICE}_points.obj" && \
  nice vc_render --mesh-resample-smoothing 3 -v ../../ -s "${SLICE}" -o "${SLICE}.obj" --output-ppm "${SLICE}.ppm" && \
  mkdir -p layers && \
  nice vc_layers_from_ppm -v ../../ -p "${SLICE}.ppm" --output-dir layers/ -r 32 -f tif --cache-memory-limit 50G && \
  vc_area ../.. ${SLICE} | grep cm | awk '{print $2}' | tee area_cm2.txt && \
  echo 'david' > author.txt

Version

volume-cartographer 2.25.0 (Untracked)

How did you install the software?

Docker

On which operating systems have you experienced this issue?

  • macOS
  • Windows
  • Linux

Relevant log output

[2023-08-10 20:00:16.260] [volcart] [info] Loading file...
[2023-08-10 20:00:16.298] [volcart] [info] Loaded PointSet with 849849 points
[2023-08-10 20:00:16.359] [volcart] [info] Writing to OBJ...
[2023-08-10 20:00:18.136] [volcart] [info] File written: 20230801194757_points.obj
[2023-08-10 20:00:18.225] [volcart] [info] Loading VolumePkg: Scroll2.volpkg
[2023-08-10 20:00:18.229] [volcart] [info] Created new Render graph in VolPkg: 20230810200018
[2023-08-10 20:00:32.409] [volcart] [info] ACVD: Input: 849849 verts, 1696000 faces
[2023-08-10 20:00:33.614] [volcart] [info] ACVD: Performing isotropic mesh resampling...
[2023-08-10 20:00:37.736] [volcart] [info] ACVD: Computing quadrics optimization...
[2023-08-10 20:00:38.190] [volcart] [info] ACVD: Output: 16211 verts, 31822 faces
[2023-08-10 20:00:38.465] [volcart] [info] Solving ABF++
[2023-08-10 20:00:43.526] [volcart] [info] ABF++ Iterations: 3 || Final norm: 0.00049774
[2023-08-10 20:00:43.526] [volcart] [info] Solving LSCM
[2023-08-10 20:00:44.356] [volcart] [info] L2 Norm: 1.0005, LInf Norm: 1.4752
Requested to load slice... (1-1000)
Volume Cache :: Capacity: 221 || Size: 50GB
Loading PPM...
Generating layers...
Requested to load slice... (1-1000)
Writing layers...
3.14556

Code of Conduct

  • I agree to follow this project's Code of Conduct

[Bug] Sharkbite Bug - all segmentations have a visual bug impacting ink detection training.

What happened?

This looks like a meshing bug: there are visual artefacts throughout every segment we have done so far, which will prevent the correct training of ink detection algorithms. An image was posted in the Discord.

This has been isolated as an artefact of moving the segmentation line too far on a given segment.

This still needs to be carefully debugged/understood, so we can prevent it from happening.

There is a smoothing function that mitigates this to some extent, but we need to know exactly what it is doing: Laplacian smoothing will damage our data.

Steps to reproduce

All segmentations.

Version

all versions

How did you install the software?

Both source and Docker builds for segmentation; only Docker builds for running vc_render

On which operating systems have you experienced this issue?

  • macOS
  • Windows
  • Linux

Relevant log output

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

[Bug] Resolved edge pair already paired during flattening

What happened?

Some segments made with VC fail to render, with no clear way to fix them. There are a couple of different error messages that appear, which I will attempt to collate here; the most common is:

[volcart] [error] Resolved edge pair already paired. Edge (17246, 17214) is not 2-manifold.

Steps to reproduce

It is unclear what exactly produces the errors; we think it may be caused by placing one point on top of another.
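
For diagnosis, a small sketch that flags the offending edges before flattening is attempted (faces assumed to be triangles given as vertex-index triples; this is illustrative, not part of VC):

from collections import Counter

def non_manifold_edges(faces):
    """Return edges shared by more than two triangles, i.e. edges that are not 2-manifold."""
    counts = Counter()
    for a, b, c in faces:
        for edge in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(edge))] += 1
    return [edge for edge, n in counts.items() if n > 2]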

Version

any version

How did you install the software?

None

On which operating systems have you experienced this issue?

  • macOS
  • Windows
  • Linux

Relevant log output

[volcart] [error] Resolved edge pair already paired. Edge (17246, 17214) is not 2-manifold.

Code of Conduct

  • I agree to follow this project's Code of Conduct

[Bug] VC Packager detects MacOS metadata files as actual files

What happened?

When running vc_packager for the very first time against the campfire zip file, as instructed in the segmentation tutorial, I could not get it to work. I would consistently get an error stating:

Found 477 files which did not match the initial slice:
<LONG LIST OF FILES HERE>
ERROR: Slices in slice directory do not have matching properties (width/height/depth)

After a lot of banging my head against the wall, I realized it was because macOS, when working on a non-HFS formatted drive (such as the ExFAT external disk I was using, or a samba share), writes a metadata file alongside each actual file it performs IO on. For example, for the file /campfire/rec/0168.tif that was extracted from the zip file, there was a corresponding metadata file at /campfire/rec/._0168.tif.

It seems vc_packager tries to parse these just as it would any other tif file (understandably so, though this is less than desirable).
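
As a stopgap, the AppleDouble files can be removed before packaging, e.g. with find ./campfire/rec -name '._*' -delete, or with a minimal sketch like this (assuming the slices live in a single flat directory):

from pathlib import Path

slice_dir = Path("./campfire/rec")

# AppleDouble metadata files are named "._<original name>"; remove them so only
# the real slice images are left for the packager to inspect.
for f in slice_dir.glob("._*"):
    print(f"removing {f}")
    f.unlink()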

Steps to reproduce

  1. Extract the campfire.zip to a non-HFS disk (such as an external drive or samba share) on MacOS.
  2. Attempt to run vc-packager against the unzipped data, such as vc_packager -s ./campfire/rec -v ./campfire.volpkg -u 104 -n campfire
  3. Receive the described error.

Version

2.26.0-rc.3 (according to homebrew as vc_version doesn't exist in my install)

How did you install the software?

Homebrew

On which operating systems have you experienced this issue?

  • macOS
  • Windows
  • Linux

Relevant log output

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Need to install libtiff manually

I've followed tutorial 3 in the Vesuvius Challenge and installed Volume Cartographer on macOS with the following command:

brew install --no-quarantine educelab/casks/volume-cartographer

The command ran successfully and I can see Volume Cartographer in my Applications folder. However, when I try to run

vc_render --help

the following error occurs:

dyld[4064]: Library not loaded: /usr/local/opt/libtiff/lib/libtiff.5.dylib
Referenced from: <9298E93C-EB87-3AFD-9366-F05D82C15B87> /usr/local/Caskroom/volume-cartographer/2.24.0/bin/vc_render
Reason: tried: '/usr/local/opt/libtiff/lib/libtiff.5.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/usr/local/opt/libtiff/lib/libtiff.5.dylib' (no such file), '/usr/local/opt/libtiff/lib/libtiff.5.dylib' (no such file), '/usr/local/lib/libtiff.5.dylib' (no such file), '/usr/lib/libtiff.5.dylib' (no such file, not in dyld cache)
[1]    4064 abort   vc_render --help

It seems I need to install libtiff to run this app, so I installed it manually with the command below, and after that vc_render --help works.

brew install libtiff

Not sure if this is an actual issue. If it isn't, or if I'm doing something wrong, please let me know and I will close this. Thanks!

RAM/swap usage issues

  • While segmenting, RAM usage spins up to ~95% and stays there even when not segmenting, until VC is closed.
  • While segmenting, swap usage continues to grow, up to about 8 GB, and does not empty when VC closes.

Cleanly handle volumes missing slices

Since the volume is lazy-loaded at runtime, missing slices can cause a number of unexpected crashes across our various programs. We need to review the current behavior on image load and its effect on the apps. We might also want to add a verification util or something.
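
A minimal sketch of what such a verification util might check, assuming a flat directory of consecutively numbered .tif slices (not tied to the actual volume layout):

from pathlib import Path

def find_missing_slices(slice_dir):
    """Report slice indices missing from a directory of numbered .tif files."""
    indices = sorted(int(p.stem) for p in Path(slice_dir).glob("*.tif") if p.stem.isdigit())
    if not indices:
        return []
    expected = set(range(indices[0], indices[-1] + 1))
    return sorted(expected - set(indices))

print(find_missing_slices("path/to/volume"))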

UV map auto-orientation

Discussed in #49

Originally posted by csparker247 October 31, 2023
We would like to auto-orient the UV map with respect to at least the volume (i.e. no frame of reference except the volume shape), but also landmarks in the volume (e.g. top, bottom, front, back, etc.).

The appropriate place to fix UV map auto-orientation for general objects (i.e. without any frame of reference) is in the FlatteningAlgorithm::orient_uvs_ method so that it's available to all subclasses. That's the base class for all UV algorithms, and all subclasses would need to call it inside their own compute() functions. The existing implementation (which tried to align the z-axis to the v-axis) is not currently used because it was pretty unreliable in practice. At the moment, the UV map orientation is determined by the two pinned edges in the OpenABF LSCM code, which is functionally arbitrary.
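
For reference, a rough numpy sketch of the z-to-v alignment idea (illustrative only, not the existing orient_uvs_ code; uv is assumed to be an (N, 2) array of flattened coordinates and xyz the corresponding (N, 3) volume positions):

import numpy as np

def orient_uvs_to_z(uv, xyz):
    """Rotate UV coordinates so the direction of increasing volume z maps to +v."""
    # Fit z ~ a*u + b*v + c; (a, b) is the in-plane direction of steepest z increase.
    A = np.column_stack([uv, np.ones(len(uv))])
    (a, b, _), *_ = np.linalg.lstsq(A, xyz[:, 2], rcond=None)
    g = np.array([a, b])
    g /= np.linalg.norm(g)
    # 2D rotation sending g to (0, 1), i.e. aligning it with the +v axis.
    rot = np.array([[g[1], -g[0]],
                    [g[0],  g[1]]])
    return uv @ rot.T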

Auto-orientation of the UV map with respect to landmarks should be implemented as a separate function/class that takes a UV map and landmarks as input and outputs an updated UV map. We really want to auto-orient the text in a readable direction. Once a single piece of text is found and oriented, all other text should be auto-orientable as a result.

[Feature] remember vc_render params in the metadata json

What problem is your feature request solving?

It's inconvenient to keep a separate list of vc_render params for each segment in order to rotate/flip the segment into an orientation in which the text is readable.

What is your feature request?

It would be nice if you could specify params in the JSON file for each segment and have vc_render use those params as defaults. That way you could easily run vc_render on all segments (e.g. for f in paths/; do vc_render …) and have all the outputs oriented properly.
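
As a rough sketch of the idea (the render_args key, the meta.json file name, and this wrapper are hypothetical, not an existing interface; the -v/-s flags follow vc_render's existing usage):

import json
import subprocess
from pathlib import Path

volpkg = Path("Scroll1.volpkg")  # hypothetical volume package
for seg_dir in sorted((volpkg / "paths").iterdir()):
    meta = json.loads((seg_dir / "meta.json").read_text())  # per-segment metadata file (name assumed)
    extra = meta.get("render_args", [])  # hypothetical key holding default vc_render flags
    subprocess.run(["vc_render", "-v", str(volpkg), "-s", seg_dir.name, *extra], check=True)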

What alternative solutions have you considered?

Keeping a separate list of vc_render flags for each segment, or a separate file that contains the flags.

Is there anything else we should know that wasn't included already?

Not super high priority, just something nice to consider.

Are you proposing to work on this feature yourself?

  • I am willing to submit a pull request for this feature

Code of Conduct

  • I agree to follow this project's Code of Conduct

(VC) Improve segmentation start behavior

One last low-priority thing would be to make the Enter key work as 'start' in the Ending Slice box, the way Enter works in the choose slice box.

Originally posted by @hariseldon137 in #28 (comment)

Related to the above request, the behavior of tabbing through the segmentation options can be improved. For example, hitting tab while focused on the Ending Slice entry box does not transfer focus to the Start button.

(VC) Improve image navigation

There are some leftover requests from #14 that haven't been addressed yet. These didn't really make sense with the keyboard shortcuts, so moving them here:

  • remap alt to ctrl for scrolling X axis
  • ctrl + mouse for X-axis panning

Scrolling in the image area is non-trivial to fix as there's default behavior defined by Qt/the OS that we have to consider. For example, I use an Apple Magic Mouse with an XY track pad that has no difficulty scrolling in the image area as is. We want to support the above while not breaking these default behaviors.

(VC) Add key bindings

From Hari_Seldon:

  • Left/Right arrows for previous/next slice (Shift modifier for ×10)
  • +/- for zoom in/out (no Ctrl)
  • [/] for decrease/increase impact range
  • remap Alt to Ctrl for scrolling the X axis
  • a/d for previous/next slice (Shift modifier for jumping by 10)
  • w/s for zoom in/out
  • q/e for decrease/increase impact range
  • Ctrl + mouse for X-axis panning

vc_render segfaults on Apple Silicon

vc_render non-deterministically segfaults on Apple Silicon devices. To consistently reproduce, try to default texture any segmentation:

vc_render -i Testing.volpkg -s local-reslice-particle-sim

My debugging thus far shows that this command segfaults in graph::CalculateNumVertsNode::compute() because the mesh_ pointer is NULL. I haven't determined why this is.

The error does not occur in Debug builds (at least I haven't had it happen yet), but does occur in Release and RelWithDebInfo builds. I have not been able to reproduce the issue at all on Apple Intel devices.

If you encounter this issue, try running your command again as the issue seems to be non-deterministic. If you find a command that consistently reproduces this issue, please let us know.

Row duplicates in ordered point sets

Users are reporting that VC ordered point sets have duplicate rows for a given slice number.

Our first thought is that this is caused by point set merging after editing, but it could also be caused by the segmentation algorithms themselves.
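
A quick sketch of how the duplicates could be flagged, assuming the ordered point set is loaded as an (H, W, 3) array of [x, y, z] points where every point in a row shares the same z value:

from collections import Counter

def duplicate_slice_rows(cloud):
    """Return slice (z) values that appear in more than one row of an ordered point set."""
    row_slices = [int(round(row[-1][2])) for row in cloud]
    return sorted(z for z, count in Counter(row_slices).items() if count > 1)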

[Feature] Add mesh support to vc_area

What problem is your feature request solving?

vc_area currently only supports segmentations stored inside a .volpkg file. However, we mostly want to know the surface area of the meshes that are output by vc_render.

What is your feature request?

vc_area should be extended to add support for mesh files. This should be a fairly trivial addition. We could also consider dropping support for segmentations entirely, since measuring their surface area is highly dependent on the meshing and we currently don't allow any control over meshing in vc_area.
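
For reference, the measurement itself is straightforward once a mesh is loaded; a minimal sketch that sums triangle areas (vertex and face arrays assumed to come from any .obj loader):

import numpy as np

def mesh_area(vertices, faces):
    """Total surface area of a triangle mesh: sum of 0.5 * |AB x AC| over all faces."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    ab = v[f[:, 1]] - v[f[:, 0]]
    ac = v[f[:, 2]] - v[f[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(ab, ac), axis=1).sum()

(Converting to physical units such as cm² would still require the volume's voxel size, which a bare mesh file does not carry.)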

What alternative solutions have you considered?

No response

Is there anything else we should know that wasn't included already?

No response

Are you proposing to work on this feature yourself?

  • I am willing to submit a pull request for this feature

Code of Conduct

  • I agree to follow this project's Code of Conduct

Improve Docker workflow triggers

There are a couple of ways that I would like Docker workflow triggers to be improved:

  1. If an edge build is running for develop and develop gets a new push, cancel the current run.
  2. If an edge build is running for develop and develop gets a new release tag, finish the current run, then run the release tag build (since cached layers get reused)
  3. If the build-docker label is added to a PR, build the Docker image for the current branch, upload it (maybe? just want it for testing), and remove the label.
