blensor's People

Contributors

aligorith, andresusanopinto, ben2610, bjornmose, brechtvl, cwant, ddunbar, dfelinto, dingto, elubie, gaiaclary, howardt, ideasman42, jannekarhu, jesterking, joeedh, johnnygizmo, kjym3, lukas-toenne, lukastoenne, maximecurioni, moguri, nexyon, nicholasbishop, psy-fidelious, schlaile, sergeyvfx, significant-bit, tonroosendaal, willianpgermano

blensor's Issues

Blender 2.8

Has anyone tried to get this to work with Blender 2.8? I am having some issues getting it running; I am unable to install the supporting packages.

E: Unable to locate package libopenjpeg2
E: Package 'libpng12-0' has no installation candidate
E: Unable to locate package libboost-regex.62.0
E: Couldn't find any package by glob 'libboost-regex.62.0'
E: Couldn't find any package by regex 'libboost-regex.62.0'

Importing a custom model fails with Kinect sensor

Hi Michael,
when I import a model from the Google 3D Warehouse (converted to OBJ with SketchUp), the Kinect sensor fails to create a valid PCD file.
Other sensors seem to work well.
As for the Kinect, the output is:

nan nan nan 2.350988561514729e-38 -1

for every data point.
When I try to create a PGM file from the scan, I cannot make any sense of the content. There is no header explaining the format as far as I can see in a hex viewer, so it can't be a valid PGM file, I think...

Maybe I missed some settings in the kinect sensor?
Maybe I have some misconfiguration in the sensor range settings, which leads to an overflow at larger distances?

However, the label of the point (-1) indicates that no model was found?

If you like, you can see the contents of the blender project file here:
http://www71.zippyshare.com/v/13984718/file.html

Thank you very much for any hint!

Blensor crash when starting

Hey,

I just compiled Blensor from source, and when I try to start it, it crashes.

Writing: /tmp/blender.crash.txt

Blender 2.74 (sub 1), Unknown revision

backtrace

./blender(BLI_system_backtrace+0x1d) [0x1112b5d]
./blender() [0x83ae93]
/lib/x86_64-linux-gnu/libc.so.6(+0x36d40) [0x7f62cb76bd40]
./blender() [0xa1a5d6]
./blender(ui_draw_but+0x408) [0xa1ed28]
./blender(UI_block_draw+0x1aa) [0x9dac8a]
./blender(ED_region_header+0x1a8) [0xad3ca8]
./blender(ED_region_do_draw+0xa55) [0xad2875]
./blender(wm_draw_update+0x58c) [0x84096c]
./blender(WM_main+0x28) [0x83cea8]
./blender(main+0xd6f) [0x82e50f]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f62cb756ec5]
./blender() [0x83a8f7]

any idea?

Scan attribute

Hi Michael,

I'm looking forward to applying your Blensor functionality to some of my school studies, it's the perfect tool I need for some case work. I just pulled your addon into Blender 2.79 but am getting an error I'm having trouble working around right off the bat:

File "~/blensor/__init__.py", line 507, in invoke dispatch_scan(obj)
AttributeError: 'Object' object has no attribute 'scan'

Any suggestions? I looked around but didn't see any similar issue, apologies if it's been addressed previously.

Thanks

Compilation Error

Hi all,

I'm trying to build Blensor from source on Ubuntu 12.10, and I run into the following compiler error about 99% of the way through the build:

In file included from /home/sdmiller/software/blensor/intern/cycles/blender/blender_sync.h:25:0,
from /home/sdmiller/software/blensor/intern/cycles/blender/blender_python.cpp:23:
/home/sdmiller/software/build_linux/source/blender/makesrna/intern/RNA_blender_cpp.h: In member function ‘void BL::Image::zbuf(BL::Context, int*, float*, BL::Scene)’:
/home/sdmiller/software/build_linux/source/blender/makesrna/intern/RNA_blender_cpp.h:37040:127: error: cannot convert ‘float*’ to ‘float**’ for argument ‘5’ to ‘void Image_zbuf(Image*, bContext*, ReportList*, int*, float**, Scene*)’

(And other similar ones, under intern/cycles)

Any advice?

Thanks!
-Stephen

Scan Distance does not work?

Hello, I am trying to create a point cloud with a TOF camera. This works fine until I want to scan larger objects; I cannot get them scanned completely.

I found the option Scan Distance (bpy.context.object.tof_max_dist), which should set the visible range of the sensor. Although an input of up to 1000.0 is possible, it seems to me that the scans are limited to a distance of about 100.

[Screenshot: Scan_Distance setting; normally, points should also appear on the plane]

Did anyone of you have the same problem and can help me? Do any additional settings need to be made for this?

Thanks for your help

Simulating Reflectivity

Hello,

How would one go about simulating reflectivity?

Currently I convert RGB to a grayscale value and use that in lieu of reflectivity:

I = 0.2989 * R + 0.5870 * G + 0.1140 * B
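As a sketch, that conversion with the standard BT.601 luma weights (note that the largest weight belongs to green, not blue) looks like this:

```python
def luma(r, g, b):
    # ITU-R BT.601 luma weights: green dominates, blue contributes least
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

gray = luma(200, 100, 50)  # a single reflectivity-like scalar per RGB sample
```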

But this is really inadequate for my purposes. Also, it's not very clear how or whether refraction is modeled, as well as phenomena such as the Fresnel effect.

Is it possible for blensor_intern.scan() to also provide the surface normal at the ray/object intersection point?

Thanks !!

Galto

PCD export example

Hello,
I ran a PCD export example but found an issue in the output file.
The header says POINTS 24344, but the actual data under DATA ascii contains only 127 points.

Also, when I run it with my own OBJ file, the numbers do not match either.

Inconsistent range reading

Hi, I get inconsistent range readings using the Kinect sensor. The scan appears to be placed ahead of the object. Can you please advise what may be going on?

[Screenshot: error_blensor]

These are my Sensor Simulation parameters.
[Screenshot: error_sensor_simulation]

And my Lens parameters
[Screenshot: error_lens]

I can provide more information if needed.

GPU acceleration + compatibility with latest version of blender

Hello, I noticed that Blensor as it stands has no GPU acceleration. Does the software as written have any potential for adding acceleration, and/or are there any newer Blender-based lidar scanners with GPU acceleration? Also, is it feasible to update Blensor to make it compatible with Blender 3?

Depth map image crashes

Hi, thanks for sharing the Blender add-on. I downloaded the Blensor 1.0.18 RC 10 64-bit AppImage. I used the default scene to output a depth map by selecting the scanner type Depthmap. However, the output PGM file was about 16 MB and cannot be opened. Do you have any suggestions on how to proceed with a depth-map scan and output a valid PGM file?

Thanks

On which specific version of ubuntu can this be built?

I am using the newest Ubuntu, 20.04, but certain libraries/dependencies you specified are out of date. I also tried Ubuntu 16.04 through 18.04 in Docker but met the same problems (the specific problematic libraries differ in each version). Can you share the specific Ubuntu version on which this can be built?

It would be better if you could provide a Dockerfile/image (although I am not sure whether it works in Docker).

Cannot start blensor

This is on Ubuntu 18.04 while running Blensor 1.0.18 RC 8 64-bit Ubuntu 17.04+.

The first problem was that numpy was not linked properly, but I fixed that one.

The second problem is the dependency on these 3 libraries:

sudo apt-get install libopenjpeg2 libjpeg62 libpng12-0

apt-get can't find 2 of these libraries, but again I installed them through packages.ubuntu.com

  • It gave 2 more errors for dependencies:
    • libopenimageio
    • libjemalloc

I guess this is just a warning that the newest build does not work out of the box on Ubuntu 18.04.

scan API restructure

Problem

There are a lot of parameters to the scan functions scan_advanced and scan_range. This makes the code harder to read and maintain, and because each parameter has a default value, it has led to bugs where parameters were accidentally missing.

An example function call is:

blensor.blendodyne.scan_advanced( angle_resolution=obj.velodyne_angle_resolution, 
            max_distance=obj.velodyne_max_dist, start_angle=obj.velodyne_start_angle, 
            end_angle=obj.velodyne_end_angle, noise_mu = obj.velodyne_noise_mu, 
            noise_sigma=obj.velodyne_noise_sigma, add_blender_mesh=obj.add_scan_mesh, 
            add_noisy_blender_mesh=obj.add_noise_scan_mesh, 
            rotation_speed = obj.velodyne_rotation_speed, evd_file=filename,
            world_transformation = world_transformation )

Possible Solution

As can be seen above, most of the parameters come from the sensor object obj. Why not just pass that object as a parameter instead? The scan_advanced interface would then become something like:

def scan_advanced(sensor, evd_file, world_transform):

Potential Problems

I'm guessing that the reason for the explicit parameters was so that scan_advanced could be run without a sensor object. A workaround in the proposed interface would be wrapping the parameters in a dummy class:

class Namespace(object):
    pass  

sensor = Namespace()
sensor.velodyne_angle_resolution = 0.1
sensor.max_distance = 120
...
scan_advanced(sensor, 'test.evd', Matrix())

Other than testing, for which the dummy class is probably ok, I can't think of any use cases that require the parameters to be separated as they are.
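Concretely, the sensor-based interface could resolve each parameter with a getattr fallback, so the dummy-object workaround above keeps working (a sketch; the attribute names and default values here are illustrative, not the actual Blensor API):

```python
def resolve_scan_params(sensor):
    # Pull each setting off the sensor object, falling back to a default
    # when the attribute is absent (defaults here are illustrative).
    defaults = {"velodyne_angle_resolution": 0.1728,
                "velodyne_max_dist": 120.0,
                "velodyne_rotation_speed": 10.0}
    return {name: getattr(sensor, name, default)
            for name, default in defaults.items()}

class Namespace(object):
    pass

sensor = Namespace()
sensor.velodyne_max_dist = 50.0       # override only one parameter
params = resolve_scan_params(sensor)  # the rest fall back to defaults
```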

Conclusion

If you think the parameters should be left as they are, feel free to close this issue. If, however, you're happy with the proposal, I'll go ahead and do it, and submit the patch as a pull request.

File format

I can't understand the object id property. I added some cubes to the scene, but I only get one object id in the pcd file after exporting the data. I also tried exporting *.numpy, but the object id values are the same there as well.

error: can't allocate region

While scanning a scene, the following message occurs and Blender quits:

Do Blensor processing: 465
blender(400,0xa19571a8) malloc: *** mach_vm_map(size=193413120) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Calloc returns null: len=193410720 in Render ray faces, total 1830828084
Bus error: 10

Mac OS X, 8 GB RAM

Blensor pcd output laterally inverted while projecting

Issue Description:

I tried to simulate the Kinect camera using the Blensor application (from the binary release). I used the default object provided, applied an image texture to it, did some animation, and rendered it into pcd files.

I want to run some computer vision algorithm on it. I load the pcd files using C++ (with PCL library). I am interested in how the pointcloud looks on the image plane. I do a perspective projection of the pointcloud using default Kinect camera parameters.

This is where I noticed that the projected image is laterally inverted when compared with the pointcloud.


Here are the details:

Here are the screen grabs of the settings that I use in the Blender GUI:

[Screenshot: world, showing how the workspace looks]

[Screenshot: camera, the camera settings]

[Screenshot: trnsfrm, the transformation settings]

[Screenshot: kinect, the settings of the Kinect camera before scanning/rendering]

You can find a sample of the generated pointcloud in this link: Sample Pointcloud

Here's a screen shot of the PCD file when viewed in pcl_viewer (with the axes visualized):

[Screenshot: pointcloud_screen, the pointcloud in pcl_viewer; this is how it should actually look]

And here is the same pointcloud, projected in the image plane:

[Screenshot: test_pcd_n, the projection of the pointcloud in grayscale]

If you look closely, you will notice that the image and the pointcloud are mirrored in the horizontal direction.

I have not been able to understand the cause of the problem. Is Blensor dumping the X axis of the scan in the opposite direction? What is the problem and how can this be fixed?


Additional details:

If it helps, here is the code I use to project the pointcloud:

    pcl::PointCloud<pcl::PointXYZRGB>::iterator b1;
    std::vector<cv::Point3f> objectPoints;
    cv::Mat A(3, 3, cv::DataType<double>::type);  // projection matrix
    std::vector<cv::Point2f> projectedPoints;
    std::vector<float> data;
    int i = 0;

    for (b1 = cloud->points.begin(); b1 < cloud->points.end(); b1++, i++)
    {
        if (pcl_isfinite(b1->x) && pcl_isfinite(b1->y) && pcl_isfinite(b1->z))
        {
            cv::Point3d p(b1->x, b1->y, b1->z);
            data.push_back((b1->r + b1->g + b1->b) / 3);
            objectPoints.push_back(p);
        }
    }

    A.at<double>(0, 0) = cam_depth.get_px(); A.at<double>(0, 1) = 0;                     A.at<double>(0, 2) = cam_depth.get_u0();
    A.at<double>(1, 0) = 0;                  A.at<double>(1, 1) = cam_depth.get_py();    A.at<double>(1, 2) = cam_depth.get_v0();
    A.at<double>(2, 0) = 0;                  A.at<double>(2, 1) = 0;                     A.at<double>(2, 2) = 1;

    cv::Mat distCoeffs(4, 1, cv::DataType<double>::type);  // distortionless projection
    distCoeffs.at<double>(0) = 0;
    distCoeffs.at<double>(1) = 0;
    distCoeffs.at<double>(2) = 0;
    distCoeffs.at<double>(3) = 0;

    cv::projectPoints(objectPoints, cv::Mat::eye(3, 3, CV_64F), cv::Mat::zeros(3, 1, CV_64F), A, distCoeffs, projectedPoints);

    std::vector<cv::Point3f>::iterator it1 = objectPoints.begin();
    std::vector<float>::iterator it2 = data.begin();

    for (std::vector<cv::Point2f>::iterator it = projectedPoints.begin(); it != projectedPoints.end(); ++it)
    {
        int x_ = round(it->x);
        int y_ = round(it->y);
        if ((x_ > 0) && (y_ > 0) && (x_ < width) && (y_ < height))
        {
            float intensity = *it2;

            color_image[y_][x_].R = intensity;  // color_image is a matrix that holds RGB values
            color_image[y_][x_].G = intensity;
            color_image[y_][x_].B = intensity;
        }
        it1++; it2++;
    }

    return color_image;

This projection code works fine with other pointclouds, including the ones dumped from real Microsoft Kinect and Intel RealSense.

Possibility to disable automatic file numbering

When using writePCLFile() in blensor/evd.py, the PCD filename gets numbered like this:

pcl = open("%s%05d.pcd"%(self.filename,frame_counter),"w")

It would be good if the user could control or disable the numbering through a parameter.
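A minimal sketch of such a switch (the `numbered` parameter name is an assumption, not an existing option):

```python
def pcd_filename(base, frame_counter, numbered=True):
    # Current behaviour: append a zero-padded frame suffix; optionally skip it.
    if numbered:
        return "%s%05d.pcd" % (base, frame_counter)
    return "%s.pcd" % base
```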

Blender 2.9

Hi!

Can I use Blensor with Blender 2.9 or 2.8? I wrote an add-on but cannot install it in 2.7, and I want to use my add-on and Blensor in the same Blender version. Is there any possibility of building Blensor against Blender 2.9?
I have seen in #36 that you said you are working on it.

Thanks!

Adding VLP-16 to blendodyne

Hello

I'd like to add a VLP-16 to blendodyne. I understand I can use the "Generic LIDAR" to simulate a VLP-16, but I have been looking at the code and think it would be a good exercise for myself (I am a C++ programmer just getting up to speed on Python and bpy) to add it as another sensor supported by blendodyne.

Could you give me a high-level layout of which files and which portions I need to edit to add the VLP-16 to blendodyne, along with any relevant pointers/advice/caveats if applicable?

Thank you in advance and also thank you for this great tool!

Galto

TOF sensor has some limit distance

Hello, I use the TOF sensor and have a question.
It stops returning points beyond a certain distance regardless of the scan-distance setting.
Although the scan distance is set to 20 m, the scan is clipped to a circle at 5.9 m. How can I solve this?

Unable to install Blensor

Hello,

I am working on Ubuntu 18.04 64-bit but I can't manage to install/launch Blensor:
I've followed the installation instructions: installed the 3 required packages (although I couldn't do it directly from the terminal) and downloaded the latest release (RC10), but I still cannot launch the blender file.

Unlike in the normal Blender 2.79 distribution, where the blender file is recognized as an executable, in Blensor it is listed as a shared file.
When I try to run it in the terminal with "./blender" I get the error message: "./blender: error while loading shared libraries: libboost_filesystem.so.1.62.0: cannot open shared object file: No such file or directory", and I am sure that the newer version, libboost-filesystem1.65.1, is installed and up to date.

Is there a specific package I need to install to run this file, or do I maybe need to update something in the code to refer to the new version of libboost-filesystem?

Thank you in advance for your help :)

Bug in Perlin Noise at small scales

So if you use a small scale with Perlin noise (as the Kinect sensor does), the indexing fails here.

This conversion to int8 should be changed to int32 to avoid going out of range at small scales. In such cases (again, as with the Kinect), the n parameter needs to be set higher as well (e.g. 512).
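The overflow can be illustrated without numpy: emulating a C-style cast to signed 8-bit shows how a perfectly valid index into a 512-entry permutation table silently wraps into the wrong range.

```python
def to_int8(x):
    # emulate truncation to a signed 8-bit integer, as the int8 cast does
    return ((x + 128) % 256) - 128

n = 512
idx = 300              # a legitimate index when the table has n = 512 entries
wrapped = to_int8(idx) # wraps to 44, indexing the wrong table entry
```

With int32 the index survives unchanged for any realistic table size, which is why widening the conversion fixes the bug.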

Question: Simulation over animated scenes

When simulating over an animated scene, i.e. several frames, is it possible to simulate only the part of the survey that belongs to each frame?
For example, with a static scanner with a rotating head, the scanner would see a different part and a different state of the dynamic object at each point in time: starting at frame 1, after 20° of rotation it might see the scene as it is at frame 10.

Or is the entire scan simulation repeated over the scene at each frame in the frame-range mode?

How are the scan labels generated in output pcd file?

Hi!
I'm following the PCL tutorial, loading and scanning a scene.
In the .pcd file produced with the Generic Lidar sensor, there is a label field:

When I visualize the unique labels with colors, I get this:

Which is different from the color information, where, for example, all the cups have different colors and it looks like a segmentation.
Could you please tell me how these labels were generated?

Multi-core processing

When running 'scan range', Blensor doesn't seem to take advantage of multiple cores. As each scan is fully independent of the next, it would make sense that this should be possible. Is there something I am missing here?

If not, do you have any advice on parallelising in a way where the .blend file does not need to be loaded multiple times?
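One workaround, sketched here under the assumption of a hypothetical helper script `scan_frames.py` that reads its frame range after `--`, is to launch several headless Blender processes, each scanning a disjoint chunk of frames:

```python
import subprocess

def split_ranges(total_frames, n_workers):
    # Partition [0, total_frames) into contiguous chunks, one per worker.
    step = -(-total_frames // n_workers)  # ceiling division
    return [(i * step, min((i + 1) * step, total_frames))
            for i in range(n_workers) if i * step < total_frames]

def launch_scans(blend_file, total_frames, n_workers=4):
    # Hypothetical driver: each worker runs Blender in background mode
    # ('-b') on its own chunk of the frame range.
    procs = [subprocess.Popen(["./blender", "-b", blend_file,
                               "-P", "scan_frames.py", "--", str(s), str(e)])
             for s, e in split_ranges(total_frames, n_workers)]
    for p in procs:
        p.wait()
```

This still loads the .blend once per worker rather than once overall, but it amortizes that cost across many frames per process.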

Thanks!

Question: kinect scans ?

Hi,
I'm trying to use the BlenSor tool to collect scans of a mesh using the Kinect. The mesh is loaded into a .blend file. I'm using BlenSor on Ubuntu 14.04 64-bit.
I modified the Python script example provided on the BlenSor website; it includes the following:

import bpy
from bpy import data as D
from bpy import context as C
from mathutils import *
from math import *
import numpy as np
import blensor

scanner = bpy.data.objects["Camera"]
scanner.location = (-2,-3,-7)
scanner.rotation_euler = (2.4674, -0.371874, -0.985201)
blensor.kinect.scan_advanced(scanner, evd_file="/home/randa/Desktop/scans/scan1.pcd")

when I run the python script in Blender or in command line it returns the following error:
File "/home/randa/Desktop/Blensortest/blensor/2.74/scripts/addons/blensor/kinect.py", line 295, in scan_advanced
all_quantized_disparities[projector_idx] = disparity_quantized
IndexError: index 3200818124 is out of bounds for axis 0 with size 307200

The location and orientation of the camera are visualized, and accordingly the script should certainly return a scan. When I slightly changed the location to (-2,-3,-6), it returned a scan.
So what could be the problem? Is there something missing in the parameters of scan_advanced?

Question about Sensor Coordinates in python script

Hello everyone,
I am currently testing Blensor's LIDAR sensor. I wrote a script to scan a scenario and then generate the .pcd files. However, I noticed that the point cloud data, when imported back into Blender or PCL, is rotated and translated. I also noticed that the Blensor GUI has an option to save the sensor coordinates (or not) when performing the scan. Is there any option I could set in the script to do the same?
I already tested the following and it did not work:

scanner = bpy.data.objects["Camera"]
scanner.local_coordinates = False
blensor.blendodyne.scan_advanced(scanner, rotation_speed = 10.0,simulation_fps=24, angle_resolution = 0.1728, max_distance = 120, evd_file= "./scan.pcd",
                                noise_mu=0.0, noise_sigma=0.03, start_angle = 0.0, end_angle = 360.0, evd_last_scan=False, add_blender_mesh = False,
                                add_noisy_blender_mesh = False)

Memory leak

The newest version of the code has a memory leak somewhere; I just generated 250 scans in a sequence and now Blensor is taking up 2.5 GB of RAM. I'll try to take a closer look so I can file a more useful bug report, but for now I just wanted to note that it exists.

TypeError in blensor.tof.scan_advanced

Issue description:

Hi, I get TypeError: unsupported operand type(s) for <<: 'tuple' and 'int' when I run blensor.tof.scan_advanced from a script in Blensor 1.0.18 RC 10 on Windows.
Here is a simple script that reproduces the error:

import bpy
import blensor

camera = bpy.data.objects['Camera']

camera.select = True
blensor.tof.scan_advanced(camera, evd_file='cube.pcd')  # TypeError
camera.select = False

and I get:

Traceback (most recent call last):
  File "Blensor-1.0.18-Blender-2.79-Winx64\2.79\scripts\addons\blensor\evd.py", line 225, in writePCLFile
    self.write_point(pcl, pcl_noisy, INVALID_POINT, self.output_labels)
  File "Blensor-1.0.18-Blender-2.79-Winx64\2.79\scripts\addons\blensor\evd.py", line 190, in write_point
    color_uint32 = (e[12]<<16) | (e[13]<<8) | (e[14])
TypeError: unsupported operand type(s) for <<: 'tuple' and 'int'

Solution:

The following quick fix solved the issue:
modifying lines 42-44 in evd.py from

INVALID_POINT = [0.0, 0.0, 0.0, float('NaN'), float('NaN'),
                 float('NaN'),float('NaN'),float('NaN'),float('NaN'),
                 float('NaN'),float('NaN'),-1,(0,0,0),-1]

to

INVALID_POINT = [0.0, 0.0, 0.0, float('NaN'), float('NaN'),
                 float('NaN'),float('NaN'),float('NaN'),float('NaN'),
                 float('NaN'),float('NaN'),-1,0,0,0,-1]
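The reason the fix works: write_point packs e[12], e[13] and e[14] into one RGB integer, so those slots must be plain ints. With the original tuple form, e[12] is (0, 0, 0) and the shift raises the TypeError; flattened, the packing succeeds:

```python
nan = float('nan')

# Flattened form: e[12..14] are plain ints, so the bit-packing works.
e = [0.0, 0.0, 0.0] + [nan] * 8 + [-1, 0, 0, 0, -1]
color_uint32 = (e[12] << 16) | (e[13] << 8) | e[14]  # packs without error

# Original form: e[12] is a tuple, and '<<' on a tuple raises TypeError.
e_bad = [0.0, 0.0, 0.0] + [nan] * 8 + [-1, (0, 0, 0), -1]
try:
    (e_bad[12] << 16) | (e_bad[13] << 8) | e_bad[14]
    raised = False
except TypeError:
    raised = True
```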


Hope this helps!

depthmap bug: The small sensor_size will cause depthmap results error

TASK DESCRIPTION
System Information
Operating system: Windows 10
Graphics card: RTX 2080 SUPER

Blender Version (attachment F12968783: bug.blend)
Worked: Blensor 1.0.18 RC 10 Windows

Short description of error
When the sensor width is set too small, the result of the depthmap is incorrect (the scale of the scan result is not consistent with the object).

Exact steps for others to reproduce the error
Based on the default startup scene, set the camera focal length to 35mm, set the sensor width to 5.632mm, and set the scanner type to Depthmap. With the camera's location at (0, -20, 0), press the "Single scan" button; the scan result is incorrect.

Why does the HDL-64E2 lose some area when scanning?

Hi, Blensor is very impressive!

But I found a problem: when I use the HDL-64E2, it seems to miss some areas (the top of the car in the picture).
[Screenshot: HDL-64E2 scan result]

When using the HDL-32, it looks right.
[Screenshot: HDL-32 scan result]

Did I get some settings wrong? Can you help me with this? Thank you!

Problem with installation

Hello there,

I just have a problem installing Blensor. I have tried all the versions listed on the website; however, I cannot even find the Sensor Simulation tool in the Add-ons > System tab. Could you help me get access to it? I tried both Windows and macOS, but neither worked.

Many thanks

blensor scans API

Thank you for sharing this software!
On blensor.org there is the sentence "Every scanner has a scan_advanced function, the parameters vary and have to be looked up in corresponding source file."
Where can I find this scan_advanced function? These functions do not seem to be mentioned in the Blensor documentation.
By the way, where can I find the corresponding source files mentioned above?
Thank you!

can't link to libboost_locale

Hi, while compiling Blensor I receive this error:

Linking CXX executable ../../bin/blender
../../lib/libbf_intern_locale.a(boost_locale_wrapper.cpp.o): In function `bl_locale_init':
boost_locale_wrapper.cpp:(.text+0x1e): undefined reference to `boost::locale::localization_backend_manager::global()'
...
boost_locale_wrapper.cpp:(.text+0x212): undefined reference to `boost::locale::generator::generator()'

However, the libboost-locale packages (libboost-locale1.53-dev, Boost 1.53) are installed properly.

Is there a compatibility problem with 1.53?

Thanks a lot!

Is it possible to isolate blensor from blender?

I'm trying to build blensor on top of an existing Blender repository. Is there a way that you would recommend doing this? My impression is that the files for blensor are fairly disjoint from the ones necessary for blender, so it seems like it should be possible.

Thank you for the great tool!

PGM export flipped

When exporting the scan to the PGM file format, the resulting depth map is flipped upside down.

By the way, is there any way to export the PGM as binary rather than ASCII? I found that Photoshop cannot import the ASCII PGM format correctly.

wrong focal length

When using Blensor 1.0.16rc1, the focal length for a TOF sensor set in the GUI is not used correctly. It is hard-coded to 10.0 in TOF.py:
#10.0mm is currently the distance between the focal point and the sensor

sensor_width = 2 * math.tan(deg2rad(lens_angle_w/2.0)) * 10.0
sensor_height = 2 * math.tan(deg2rad(lens_angle_h/2.0)) * 10.0

Replacing 10.0 with flength fixed this problem for me.
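With that change, the computation reads as follows (sketched here as a standalone helper; deg2rad corresponds to math.radians, and flength is the focal length taken from the GUI):

```python
import math

def sensor_size(lens_angle_deg, flength):
    # sensor extent derived from the lens opening angle and the configured
    # focal length, instead of the hard-coded 10.0 mm
    return 2 * math.tan(math.radians(lens_angle_deg / 2.0)) * flength

sensor_width = sensor_size(90.0, 10.0)  # matches the old hard-coded case
```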
