
cameratransform's People

Contributors

alexanderwinterl, casper-smet, colinbrosseaualgolux, dependabot[bot], foxsr, rgerum


cameratransform's Issues

Sensor height parameter is ignored

Hey, thanks for the awesome work.

I have noticed the sensor height parameter is ignored. I can't achieve the expected results with CameraTransform and I wonder if this might be the reason.

Sample code below. No matter whether the sensor height is 1, 24 or 100, the code below always outputs True.

import cameratransform as ct
import numpy as np

### Variables

sensor_size      = (35.9, 24.0)
#sensor_size      = (35.9, 1)
#sensor_size      = (35.9, 100)
image_size       = (7952, 5304)
focal_length     = 25.0
camera_latitude  = -36.46268
camera_longitude = 174.701975
camera_altitude  = 360.912796
yaw              = 77.773732 - 90
pitch            = 68.539041

coords = np.array([[174.70690997500003, -36.461052382999980, 35.97441482543945],
                   [174.70771318800007, -36.461040889999936, 35.79658508300781],
                   [174.70772919800004, -36.461770799999954, 35.75333786010742],
                   [174.70692597800007, -36.461782292999940, 35.95493698120117],
                   [174.70690997500003, -36.461052382999980, 35.97441482543945]])

### CameraTransform

projection = ct.RectilinearProjection(focallength_mm=focal_length, sensor=sensor_size, image=image_size)
orientation = ct.SpatialOrientation(heading_deg=yaw, tilt_deg=pitch)
cam = ct.Camera(projection, orientation)
cam.setGPSpos(camera_longitude, camera_latitude, camera_altitude)
imageCoords = cam.imageFromGPS(coords)

### Result

fixedResult = [[3456.16373929, 3115.19355121],
               [3652.4109201 , 2877.87336392],
               [4266.24878201, 2916.81081086],
               [4148.80805633, 3158.17452882],
               [3456.16373929, 3115.19355121]]

# Round the imageCoords to 8 decimals to compare with the fixedResult
for i in range(len(imageCoords)):
    imageCoords[i][0] = round(imageCoords[i][0], 8)
    imageCoords[i][1] = round(imageCoords[i][1], 8)
    
print(np.array_equal(np.array(imageCoords),np.array(fixedResult)))
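
For reference, a quick probe of whether the sensor height enters the projection at all, reusing the numbers above and the getFieldOfView() call that appears in other issues here (a sketch only; it does not assert what the output is):

import cameratransform as ct

# Check whether the vertical field of view reacts to the sensor height;
# if the height were used, the second value should differ between runs.
for sensor_height in (1, 24.0, 100):
    projection = ct.RectilinearProjection(focallength_mm=25.0,
                                          sensor=(35.9, sensor_height),
                                          image=(7952, 5304))
    print(sensor_height, projection.getFieldOfView())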

How to georeference the top down image?

Thanks for the excellent library! I've been working to process some drone images that are intentionally about 40 deg off nadir and transform them to how they'd look top down so I can pull them into a GIS workflow and map them. Is there any way to take the output of cam.getTopViewOfImage() and properly georeference it? I have been trying with rasterio and a few other libraries, but I can't quite swing it. I need either the locations of all four corners of the image or a proper transform to put into rasterio. I feel like the GPS locations of the corners of the top view image should already be within cameratransform somewhere? Is there a good way to access this data? Or a suggested way for mapping that top down image?
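
To make the question concrete, here is a sketch of one possible route, assuming cam.gpsFromSpace is available to convert space coordinates (metres around the camera) back to GPS; the camera setup and extent values below are made up for illustration:

import numpy as np
import cameratransform as ct

# Illustrative camera; in practice use the Camera already built for the drone image.
cam = ct.Camera(ct.RectilinearProjection(focallength_mm=8, sensor=(7.1, 5.3), image=(4000, 3000)),
                ct.SpatialOrientation(elevation_m=100, tilt_deg=40, heading_deg=0))
cam.setGPSpos(-36.46, 174.70, 100)

# Use the same extent (metres, space coordinates) passed to
# cam.getTopViewOfImage(extent=[x_min, x_max, y_min, y_max]).
extent = [-150, 150, 50, 350]  # made-up values
x_min, x_max, y_min, y_max = extent

# the four corners of the top-view raster on the ground plane (Z = 0)
corners_space = np.array([[x_min, y_max, 0],   # top-left
                          [x_max, y_max, 0],   # top-right
                          [x_max, y_min, 0],   # bottom-right
                          [x_min, y_min, 0]])  # bottom-left

corner_gps = cam.gpsFromSpace(corners_space)   # lat/lon/alt of each raster corner
print(corner_gps)  # these could become ground control points for rasterio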

You can see my image here as the output of cam.getTopViewOfImage():

(attached image: ocean_color_image_reprojected)

Thanks again for the helpful library!

no attribute deg2rad on getBearing

  • Python 3.6.9
Traceback (most recent call last):
  File "./solve.py", line 45, in <module>
    tilt =  ct.getBearing((sat_lat,sat_lon), (statue[0], statue[1]))
  File "/usr/local/lib/python3.6/dist-packages/cameratransform/gps.py", line 315, in getBearing
    lat1, lon1, h1 = splitGPS(point1)
  File "/usr/local/lib/python3.6/dist-packages/cameratransform/gps.py", line 326, in splitGPS
    lat1 = np.deg2rad(x[..., 0])
AttributeError: 'Angle' object has no attribute 'deg2rad'

requirements missing opencv

Cameratransform in version 1.1 is missing opencv (cv2) in its requirements file.

Should we aim for an optional import to keep the dependencies simpler?
OpenCV is only required to calculate the top view projections, as far as I am aware.
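
For what it's worth, a sketch of the usual lazy-import pattern (not the library's current code) that would keep OpenCV optional:

# Import cv2 lazily and fail with a clear message only where the
# top-view warping actually needs it.
try:
    import cv2
except ImportError:
    cv2 = None

def _require_cv2(feature="top view projection"):
    if cv2 is None:
        raise ImportError(f"{feature} requires OpenCV; install it with "
                          "'pip install opencv-python'")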

RectilinearProjection init args view_x_deg/view_y_deg do not seem to be set correctly

The following assertions unexpectedly don't pass:

projection = cameratransform.RectilinearProjection(
    view_x_deg=61.617,
    image=(4608, 3456)
)
assert abs(projection.getFieldOfView()[0] - 61.617) < 1e-5
projection = cameratransform.RectilinearProjection(
    view_y_deg=48.192,
    image=(4608, 3456)
)
assert abs(projection.getFieldOfView()[1] - 48.192) < 1e-5
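
For reference, the relation a rectilinear projection should satisfy between the horizontal field of view and the focal length in pixels, as a standalone check (plain math, no cameratransform involved):

import math

# f_px = (width_px / 2) / tan(view_x_deg / 2); inverting it should give the FOV back.
width_px = 4608
view_x_deg = 61.617

f_px = (width_px / 2) / math.tan(math.radians(view_x_deg) / 2)
recovered_view_x = 2 * math.degrees(math.atan(width_px / (2 * f_px)))
print(f_px, recovered_view_x)  # recovered_view_x should equal 61.617 up to rounding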

Ray computation from orientation

Hi,

I might be wrong, but I suspect a bug in the ray computation from a camera orientation. It seems to me that the roll angle is not accounted for, as changing it does not affect the ray vector.

cc = ct.Camera(ct.RectilinearProjection(focallength_px=3000, image=(2000, 2000)),
               ct.SpatialOrientation(elevation_m=10, tilt_deg=12, roll_deg=0, heading_deg=0))
offset, ray = cc.getRay([1000, 1000], normed=False)

Returns

array([ 0. , 0.20791169, -0.9781476 ])

and this:

cc = ct.Camera(ct.RectilinearProjection(focallength_px=3000, image=(2000, 2000)),
               ct.SpatialOrientation(elevation_m=10, tilt_deg=12, roll_deg=20, heading_deg=0))
offset, ray = cc.getRay([1000, 1000], normed=False)

gives

array([-1.47684333e-17, 2.07911691e-01, -9.78147601e-01])

So, up to numerical precision, this is the same ray even though the roll angle changed from 0 to 20 degrees. We should have a much larger y component than before (and correspondingly a smaller z component).
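
For reference, a small probe using the same camera: pixel [1000, 1000] is the centre of a 2000x2000 image, so it may also be worth querying an off-centre pixel with and without roll:

import cameratransform as ct

# If roll is applied, these two rays should differ for an off-centre pixel.
for roll in (0, 20):
    cc = ct.Camera(ct.RectilinearProjection(focallength_px=3000, image=(2000, 2000)),
                   ct.SpatialOrientation(elevation_m=10, tilt_deg=12,
                                         roll_deg=roll, heading_deg=0))
    offset, ray = cc.getRay([1500, 1000], normed=False)
    print(roll, ray)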

Did I miss something or is this a bug somewhere?

Thanks

gpsFromImage returns incorrect pixel location and truncates results

Hi! First off, thank you for this repo!

The gpsFromImage method seems to conflict with other Python modules. I plan to pass in pixel locations, but whenever I test your cameratransform module with the PyTorch yolov7 module, I get the below results. For some reason, the gpsFromImage method returns truncated duplicate results.

(attached image: gpsFromImage-output)

Here is code:

import cameratransform.camera as ct
import yolov7

projection = ct.RectilinearProjection(image=(4608, 3456),
                                      focallength_x_px=7439.143084,
                                      focallength_y_px=7439.143084,
                                      center_x_px=2304,
                                      center_y_px=1728,
                                      sensor_width_mm=5.90,
                                      sensor_height_mm=4.43)
orientation = ct.SpatialOrientation(elevation_m=9.8,
                                    tilt_deg=0.0,
                                    roll_deg=0.0,
                                    heading_deg=180)

cam = ct.Camera(projection, orientation)
cam.setGPSpos(lat=39.3166667, lon=-76.66775)

model = yolov7.load('best.pt')

model.conf = 0.25     # NMS confidence threshold
model.iou = 0.45      # NMS IoU threshold
model.classes = None  # (optional list) filter by class

img = 'P0810234.JPG'

results = model(img, size=416, augment=True)

# parse results
predictions = results.pred[0]
boxes = predictions[:, :4]     # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]

coords = cam.gpsFromImage([[2109., 1533., 2211., 1605.],
                           [1992., 1020., 2100., 1074.],
                           [2044., 3237., 2080., 3270.],
                           [1578., 1943., 1620., 1981.]])

print(coords)

results.show()

Any help is greatly appreciated.
Thank you.

Deforming lenses don't respect points behind the camera

Projecting a point behind the camera using imageFromSpace returns NaN for that point if hide_backpoints is True, but when those points are passed to a lens, the NaNs get set to 0, e.g. in lens_distortion.py line 160:

        # set nans to 0
        points[np.isnan(points)] = 0

This causes two problems: first, it places all back points in the centre of the visible image, producing incorrect results; secondly, the user is then unable to differentiate back points from front points when the function returns. If no lens is specified in the camera, the non-distorting lens just passes the points through unmolested.
It would be great to keep the back points as NaNs, or at least have that as an option. In my case, filtering out the points isn't desirable because I need the order and number of points to be maintained for later processing.
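
For illustration, a workaround sketch of the behaviour asked for here; the distortion call is passed in as a plain function because the exact lens method name is an assumption, not the library's documented API:

import numpy as np

# Remember which points were NaN before distortion and restore them afterwards,
# so the order and number of points is preserved.
def distort_keeping_nans(distort, points):
    points = np.atleast_2d(np.asarray(points, dtype=float))
    back_mask = np.any(np.isnan(points), axis=-1)
    out = distort(points)
    out[back_mask] = np.nan
    return out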

Poetry could fail with Python on Win10

I have tried to install the newest version of cameratransform, which now requires Poetry. (Documentation update, anyone?)
It turns out that one must install Python with the normal installer under Win10, which uses the user's local folder, like
'%userprofile%\AppData\Local\Programs\Python\Python311'.
If you install Python with the MS Store app, it will be sandboxed into 'C:\Users\shilo_i\AppData\Local\Packages\Python...', and this does not go smoothly with Poetry.

Unable to reproduce image results from test_fits.py

Hi. I've been exploring cameratransform to understand how to perform backprojections on a single image. I'm not obtaining the correct backprojected penguin points nor the horizon line shown in the documentation. I know the backprojected points are incorrect because the backprojected horizon line is at the bottom of the image and the backprojected landmarks are outside the field of view:

My information fit image:
(attached image: information_fit_metropolis1e3)

My camera trace from the metropolis algorithm:
(attached image: camera_trace_metropolis1e3)

The only changes I made to the original script are the addition of a line reading the CameraImage2.jpg image (via cv2.imread; I also tried plt.imread) and replacing the matplotlib backend with one that works for me (via matplotlib.use('TkAgg')). A difference I noticed between the script and the documentation is that metropolis is set by default to run for 1,000 iterations vs 10,000, but even changing 1e3 to 1e4 to match the value in the documentation snippet, I get the same wrong backprojections in the image.

I'm also confused as to why, in this test, camera.metropolis(...) is first used to estimate the camera extrinsic parameters, but then camera.fit(...) uses the same initial values and obtains completely different estimates. The values with the highest probability given by metropolis are the ones supposed to be the estimates for [elevation_m, rot_deg, tilt_deg, heading_deg], and these should fill the extrinsic parameter matrices and be used for backprojection, right?
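
As an aside, this is how I would read point estimates off the metropolis trace, assuming it is a pandas DataFrame with one column per fitted parameter (purely a sketch, not the test's own code):

import pandas as pd

def trace_estimates(trace: pd.DataFrame, burn_in: int = 500) -> pd.DataFrame:
    stable = trace.iloc[burn_in:]                    # drop the burn-in phase
    return pd.DataFrame({"estimate": stable.mean(),  # posterior mean per parameter
                         "std": stable.std()})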

imageFromGPS doesn't return correct pixel location

imageFromGPS doesn't return correct pixel locations for my nadir-pointing camera; I get NaNs. If I come slightly off nadir, like 10 degrees, I get pixel locations way outside the image size. Even going up to 45 degrees tilt, I'm getting incorrect pixel locations.

Here's some example code of the problem:
import cameratransform as ct

imageWidth = 2048
imageHeight = 1536
pixelSizeMm = 0.00345

focalLengthMm = 8
sensorSizeMm = (imageWidth * pixelSizeMm,
                imageHeight * pixelSizeMm)

cam = ct.Camera(ct.RectilinearProjection(focallength_mm=focalLengthMm,
                                         sensor=sensorSizeMm,
                                         image=(imageWidth, imageHeight)),
                ct.SpatialOrientation(elevation_m=20,
                                      tilt_deg=0,
                                      roll_deg=0,
                                      heading_deg=0))

lat = 33.8121
lon = -117.9190
el = 0
cam.setGPSpos(lat, lon, el)

centerPixel = [imageWidth/2, imageHeight/2, el]
centerGps = cam.gpsFromImage(centerPixel)
centerPixelFromGps = cam.imageFromGPS(centerGps)

print(f'Pixel: {centerPixel}')
print(f'Expected GPS: {lat} {lon} {el}')
print(f'Center GPS: {centerGps}')
print(f'Center Pixel: {centerPixelFromGps}')

And I get the following results for
0 tilt:
Pixel: [1024.0, 768.0, 0]
Expected GPS: 33.8121 -117.919 0
Center GPS: [ 33.8121 -117.919 0. ]
Center Pixel: [nan nan]

10 tilt:
Pixel: [1024.0, 768.0, 0]
Expected GPS: 33.8121 -117.919 0
Center GPS: [ 33.8121 -117.919 0. ]
Center Pixel: [ 1024. -12382.7984223]

45 tilt:
Pixel: [1024.0, 768.0, 0]
Expected GPS: 33.8121 -117.919 0
Center GPS: [ 33.8121 -117.919 0. ]
Center Pixel: [ 1024. -1550.84057971]

80 tilt:
Pixel: [1024.0, 768.0, 0]
Expected GPS: 33.8121 -117.919 0
Center GPS: [ 33.8121 -117.919 0. ]
Center Pixel: [1024. 359.12584184]

Project Maturity & Getting Started Question

Hi.
This project seems to have great documentation. I see that it is tagged with a 1.1 release. However I don't see many issues or activity on the project.

Is the software mature? Is it something that I should pick up and experiment with, or is it better for me to look elsewhere for a library to take me from camera into geodetic coordinates?

If the author (@rgerum) still supports this, then I am very excited to use this. I simply love the effort that I see here.

Thanks,
Dave

What parameters can the fitting be used for?

I have an application similar to Dave Sargrad's, where I have a camera pointed at the ground traffic of an airport. I have a sample video from an airport but no camera information, specifically focal length and sensor size. Using Google Maps, I can get a reasonable estimate of where the camera is located, including heading_deg, tilt_deg and roll_deg. Can I use all of this to fit the parameters for the focal length and sensor size, if I can find enough landmarks on the airfield surface?
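
A sketch of what I have in mind, loosely following the landmark-fitting example from the docs and assuming the projection's focal length can be passed to metropolis as a FitParameter just like the orientation angles (all numbers made up):

import numpy as np
import cameratransform as ct

cam = ct.Camera(ct.RectilinearProjection(focallength_px=2000, image=(1920, 1080)),
                ct.SpatialOrientation(elevation_m=20, tilt_deg=80, heading_deg=0))

lm_points_px = np.array([[400, 600], [1500, 650], [960, 900]])         # clicked pixels
lm_points_space = np.array([[-30, 120, 0], [40, 130, 0], [5, 60, 0]])  # metres, from Google Maps
cam.addLandmarkInformation(lm_points_px, lm_points_space, [3, 3, 5])   # positional uncertainty

trace = cam.metropolis([
    # parameter name worth double-checking against the projection's parameter list
    ct.FitParameter("focallength_x_px", lower=500, upper=5000, value=2000),
    ct.FitParameter("tilt_deg", lower=0, upper=180, value=80),
    ct.FitParameter("heading_deg", lower=-180, upper=180, value=0),
    ct.FitParameter("roll_deg", lower=-30, upper=30, value=0),
], iterations=1e4)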

Also, related question: How sensitive are the calculations to focal length and sensor size?

Thanks.

Validate possible combinations of init args in CameraProjection

Thanks for quickly fixing issue #24.

Concerning the passing of redundant parameters I also observed the following:

projection = cameratransform.RectilinearProjection(
    view_x_deg=61.617,
    view_y_deg=41.617,
    sensor_width_mm=12, #  or  sensor_height_mm=12
    image=(4608, 3456)
)
assert abs(projection.getFieldOfView()[0] - 61.617) < 1e-5 # passes
assert abs(projection.getFieldOfView()[1] - 41.617) < 1e-5 # fails here in both cases

That may be a disorienting asymmetry. Ideally, in this case I would expect a ValueError to be raised, in order to protect against possible misuse of the API.

Without changing the current API, I think it could already be beneficial to refactor the logic that mixes and matches the parameters in the CameraProjection __init__() into individual helper methods that validate the input on a case-by-case basis.
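
For illustration, the kind of per-case resolver I am thinking of (names and grouping are hypothetical, not cameratransform's internal API):

import math

# Each parameter combination gets its own resolver that rejects redundant
# input with a ValueError instead of silently preferring one value.
def resolve_focallength_x_px(image_width_px, view_x_deg=None,
                             focallength_mm=None, sensor_width_mm=None):
    if view_x_deg is not None:
        if focallength_mm is not None or sensor_width_mm is not None:
            raise ValueError("view_x_deg is redundant with focallength_mm/sensor_width_mm")
        return image_width_px / (2 * math.tan(math.radians(view_x_deg) / 2))
    if focallength_mm is not None and sensor_width_mm is not None:
        return focallength_mm * image_width_px / sensor_width_mm
    raise ValueError("need either view_x_deg or both focallength_mm and sensor_width_mm")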

Geolocalization of Drone Image

First of all, thank you so much for your wonderful library.
Despite various tests, I am having problems with the geolocation of an image taken with a drone.
Using the following code I have problems with the image rotation, and I was wondering whether I am misusing the tilt_deg, roll_deg and heading_deg parameters within SpatialOrientation.
Do you by any chance know where I'm going wrong?
Thank you in advance


import cameratransform as ct

metadata = {
    "ImageWidth": 640,
    "ImageHeight": 512,
    "FocalLength": "19.0 mm",
    "RelativeAltitude": 87.599998,
    "GimbalRollDegree": 0,
    "GimbalYawDegree": 0.6,
    "GimbalPitchDegree": -90,
    "FlightRollDegree": -0.6,
    "FlightYawDegree": 89.300003,
    "FlightPitchDegree": 0.4,
    "GPS GPSLatitude": -17.25213463888889,
    "GPS GPSLongitude": -71.18490022222223,
}

image_size = (metadata["ImageWidth"], metadata["ImageHeight"])
sensor_width_mm = 7.44
sensor_height_mm = 5.55
sensor_size = (sensor_width_mm, sensor_height_mm)
focal_length = float(metadata["FocalLength"][:-3])
tilt_deg = metadata["FlightPitchDegree"]
roll_deg = metadata["FlightRollDegree"]
heading_deg = metadata["FlightYawDegree"]
relative_altitude = metadata["RelativeAltitude"]
lat = metadata["GPS GPSLatitude"]
lon = metadata["GPS GPSLongitude"]
cam = ct.Camera(
    ct.RectilinearProjection(focallength_mm=focal_length, sensor=sensor_size, image=image_size),
    ct.SpatialOrientation(
        elevation_m=relative_altitude,
        tilt_deg=tilt_deg,
        roll_deg=roll_deg,
        heading_deg=heading_deg,
    ),
)
cam.setGPSpos(lat, lon, relative_altitude)

center_pixel = [image_size[0]/2, image_size[1]/2, relative_altitude]
cam.gpsFromImage(center_pixel)

Trouble with negative angles

Hello. Thank you for your great work.
Unfortunately we have some issues, and they stem from some gaps in the docs.

So, we use a roll-pitch-yaw system,
and we have:

roll= -4
pitch= -11.6
yaw= 96

And we have the Cesium geospatial system; they use the same convention and adapt some values:

roll= 356
pitch= -11.6
heading=96

And we decided to use the cameratransform lib instead. We started by comparing the horizon and got some issues.

roll= -4
pitch= -11.6
yaw= 96

With these values we get a mirrored horizon, and only with these:

roll= 4
pitch= 11.6
yaw= 96

does everything become correct.
So could you add more to the documentation about the angles and the correct degree conventions for them? And of course the formula tilt_deg = 90 - pitch_deg seems incorrect for negative values.
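
To make the question concrete, this is the mapping as I currently read it, using the tilt formula quoted above; it is a hypothesis to check (especially for negative pitch), not a statement of how the library actually behaves:

# Aircraft-style roll/pitch/yaw -> cameratransform angles, per the formula above.
def to_cameratransform_angles(roll_deg, pitch_deg, yaw_deg):
    return {"tilt_deg": 90 - pitch_deg,   # pitch 0 (level) -> tilt 90 (horizon)
            "roll_deg": roll_deg,
            "heading_deg": yaw_deg}

print(to_cameratransform_angles(-4, -11.6, 96))
# {'tilt_deg': 101.6, 'roll_deg': -4, 'heading_deg': 96}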

Sign change of x in getRay()

In the RectilinearProjection function getRay(), there is a sign change for the X coordinate which doesn't appear in the documentation:

        ray = np.array([-(points[..., 0] - self.center_x_px) / self.focallength_x_px,
                        (points[..., 1] - self.center_y_px) / self.focallength_y_px,
                        np.ones(points[..., 1].shape)]).T

Why the minus sign in front of -(points[..., 0] - self.center_x_px)? The function's docstring has no minus:

    .. math::
        \vec{r} = \begin{pmatrix}
            (x_\mathrm{im} - c_x)/f_x\\
            (y_\mathrm{im} - c_y)/f_y\\
            1\\
        \end{pmatrix}

I think this makes the heading angle positive clockwise instead of counter-clockwise, as shown in the 3D representations of the coordinate systems on Read the Docs. However, I'm wondering whether this is the only impact or whether it may have other unintended consequences down the road.

gpsFromImage returning nan

Thanks for your hard work on this library and the great documentation. We have been experimenting with the gpsFromImage function trying to get real world GPS coords from a point in the image. We have run into an issue where some points in the image are returning "nan" and we are not sure what is causing that.

Here is an example:

import cameratransform.camera as ct
from cameratransform.lens_distortion import BrownLensDistortion

projection = ct.RectilinearProjection(image=(1280, 720),
                                      focallength_x_px=722.7379,
                                      focallength_y_px=726.1398,
                                      center_x_px=663.8862,
                                      center_y_px=335.0579,
                                      sensor_width_mm=4.96,
                                      sensor_height_mm=3.72)
orientation = ct.SpatialOrientation(elevation_m=2.0,
                                    tilt_deg=90.0,
                                    roll_deg=0.0,
                                    heading_deg=358.950422)
distortion = BrownLensDistortion(k1=-0.3581,
                                 k2=0.1039,
                                 projection=projection)

cam = ct.Camera(projection, orientation, distortion)

cam.setGPSpos(lat=40.55660, lon=-111.89720)
coords = cam.gpsFromImage([[288.0, 228.0]])
print(coords)

The result of this print is:

[array(array([nan]), dtype=object) array(array([nan]), dtype=object)
 array([nan])]

We have successfully used the library for other points. Is this expected behavior and we are just using the library incorrectly? Any help you could provide would be appreciated.

Thanks,
Greg

Question:

Hi,
I am currently working on deep single-image calibration. I want to train a CNN to predict calibration parameters (focal, pitch, roll) and distortion parameters (k1, k2) of the radial distortion model x_d = x * (1 + k1*r^2 + k2*r^4). Is there any code to generate training images from a panorama with random parameters (focal, pitch, roll, k1, k2)?

Position problem

Thank you for this great library, which I hope will help me a lot in my project.

I am working on a project which involves converting detected features from images to GIS features.

When I pass pixel locations to the gpsFromImage function, I get very condensed locations or nan.

Please find below my implementation with the details to reproduce; I am also attaching the image used.

Your help is highly appreciated.


import numpy as np
import cameratransform as ct

img_width = 2048
img_height = 2464

f = 4.4
f_pixel = 2029.54955
focallength_x_pxv = 1023.5
focallength_y_pxv = 2029.54955

camFlir = ct.Camera(ct.RectilinearProjection(focallength_mm=f,
                                             sensor=sensor_size,
                                             image=image_size,
                                             center=(focallength_y_pxv, focallength_x_pxv)),
                    ct.SpatialOrientation(elevation_m=52, heading_deg=0))  # , heading_deg=85

width = 2048
height = 2464
lat = 29.29668106
long = 48.01361047

loc1X = 1646.0
loc1Y = 1390.0

l1X = width - loc1X
l1Y = height - loc1Y
print(l1X, l1Y)

camFlir.setGPSpos(lat, long, 0.01)

# check if the function gives the correct location for the camera
origin = camFlir.gpsFromImage(np.array([width, height]))

# working for the position, but it is in the same location as the camera position
origin = camFlir.gpsFromImage(np.array([l1X, l1Y]))

# not working even though it is within the same image space
origin = camFlir.gpsFromImage(np.array([900, 900]))

print(lat, long, origin[0], origin[1])

(attached image: 190328_071729394_Camera_0)

patch for spatial.py

Thank you for the great job.
I found a bug in spatial.py that needs fixing.

Line 107 onwards:
def _initCameraMatrix(self, height=None, tilt_angle=None, roll_angle=None):
    if self.heading_deg < -360 or self.heading_deg > 360:  # pragma: no cover
        self.heading_deg = self.heading_deg % 360
    # convert the angle to radians
    tilt = np.deg2rad(self.parameters.tilt_deg)
    roll = np.deg2rad(self.parameters.roll_deg)
    heading = np.deg2rad(self.parameters.heading_deg)

    # get the translation matrix and rotate it
    self.t = np.array([self.parameters.pos_x_m, self.parameters.pos_y_m, self.parameters.elevation_m])

    # construct the rotation matrices for tilt, roll and heading
    self.R_tilt = np.array([[1, 0, 0],
                            [0, np.cos(tilt), np.sin(tilt)],
                            [0, -np.sin(tilt), np.cos(tilt)]])
    self.R_roll = np.array([[np.cos(roll), 0, -np.sin(roll)],
                            [0, 1, 0],
                            [np.sin(roll), 0, np.cos(roll)]])
    self.R_head = np.array([[np.cos(heading), -np.sin(heading), 0],
                            [np.sin(heading), np.cos(heading), 0],
                            [0, 0, 1]])

    self.R = np.dot(np.dot(self.R_tilt, self.R_roll), self.R_head)
    self.R_inv = np.linalg.inv(self.R)

reference : https://en.wikipedia.org/wiki/Rotation_matrix

Some GUI packages are missing from poetry config file/bug fix

It is not possible to run 'gui_demonstrator.py' after installation, since some packages are missing. One has to add them:

  • poetry add qtpy
  • poetry add pyqt5
  • poetry add qimage2ndarray

In addition to that, I had to fix QtShortCuts.py, since the slider does not support float values but only integers. Also, the spinbox value type has to be handled correctly in setValue (int or float).

One has to change this in class QInputNumber(QInput):

def _doSetValue(self, value):
    v = value
    if self.decimals == 0:
        v = int(value)
    self.spin_box.setValue(v)
    if self.slider is not None:
        self.slider.setValue(int(value * self.decimal_factor))

Trying to understand the fitting plots

I have used the fitting functionality to try to find reasonable values for some of the parameters. Below is the plot that resulted. I don't understand what the red and black lines are. The output of the analysis seems to be the black line.

(attached plot)

8996  87.195699  270.120662  -135644.584362
8997  87.195699  270.120662  -135644.584362
8998  87.195699  270.120662  -135644.584362
[8999 rows x 3 columns]
Trace 8999
tilt_deg     87.19±0.011
heading_deg  270.07±0.031

See also #12 and #13

Image coordinates to World coordinate conversion using Camera Calibration

I am using a standard 640x480 webcam and have done camera calibration in OpenCV in Python 3, so I now have the intrinsic parameters, extrinsic parameters, and distortion coefficients.
How can I find the location of a point in world coordinates, on a plane, in millimetres from my scene image? I have mounted the webcam horizontally above a table, and a robotic arm is placed on the table. Using the camera I found the centroid of an object. Now, using the camera matrix, my goal is to convert the location of that object (e.g. pixel 300x200) to millimetres so that I can give those millimetres to the robotic arm to pick up the object. I assume Z=0 for the world coordinates because my object is placed on a flat surface horizontal to the camera. Please tell me how I can do this using your library. Thanks a lot!
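
For illustration, a rough sketch of how this might look with cameratransform, assuming spaceFromImage(..., Z=0) intersects the viewing ray with the table plane; all numbers below are placeholders, not values from the question:

import numpy as np
import cameratransform as ct

# Intrinsics taken from an OpenCV-style camera matrix K (illustrative values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

cam = ct.Camera(ct.RectilinearProjection(focallength_x_px=K[0, 0],
                                         focallength_y_px=K[1, 1],
                                         center_x_px=K[0, 2],
                                         center_y_px=K[1, 2],
                                         image=(640, 480)),
                ct.SpatialOrientation(elevation_m=0.60,  # camera height above the table, metres
                                      tilt_deg=0,        # looking straight down at the table
                                      roll_deg=0,
                                      heading_deg=0))

# pixel centroid of the detected object -> metres on the table plane (Z = 0)
point_m = cam.spaceFromImage([300, 200], Z=0)
print(point_m[:2] * 1000)  # x/y in millimetres, relative to the point below the camera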
