jonperdomo / openmht
Python module for multiple hypothesis tracking.
License: GNU General Public License v3.0
I run this code with only 5 detections per frame and an N-scan number of 3, yet it takes a very long time at frame 3, where it keeps running the bron_kerbosch() function.
Is there a better way to handle this part, or am I not running the code properly?
Also, how many detections per frame could the algorithm handle at most in your tests?
I would really appreciate your reply.
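For reference, the slowdown is characteristic of maximal-clique enumeration, which is worst-case exponential in the size of the compatibility graph. Below is a minimal sketch of the standard Bron-Kerbosch algorithm with pivoting (not the repository's actual bron_kerbosch() implementation) to illustrate the recursive step that dominates the runtime:

```python
# Hedged sketch: textbook Bron-Kerbosch with pivoting, shown only to
# illustrate why clique enumeration can blow up as the graph grows.
def bron_kerbosch(r, p, x, adj, cliques):
    """Enumerate maximal cliques. adj maps each vertex to its neighbor set;
    r is the growing clique, p the candidates, x the excluded vertices."""
    if not p and not x:
        cliques.append(r)
        return
    # Pivot on the vertex with the most candidate neighbors to prune branches.
    pivot = max(p | x, key=lambda v: len(adj[v] & p))
    for v in p - adj[pivot]:
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, cliques)
        p = p - {v}
        x = x | {v}

# Example: a triangle {0, 1, 2} plus the edge 2-3 has two maximal cliques.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(sorted(sorted(c) for c in cliques))  # → [[0, 1, 2], [2, 3]]
```

Even with pivoting, the number of recursive calls grows quickly with edge density, which is why a missing-edge bug (or a large N-scan window) can make this step dominate.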
Why does the MHT algorithm only work up to the 99th frame? I have something like a thousand frames; how can I process all of them?
I found a bug in the plot_tracks routine: the last track was not plotted. I have attached a zip of the corrected version with some additions.
It is unfortunate that there is no ParameterFile.txt for the sample data. For new users looking to quickly try out the library, it needs to be easy to run the sample, or they may quickly leave and use another library instead. Could you please provide a ParameterFile.txt for the sample data so that one can try out the application?
Thank you
v_ordered = set()
degrees = list(enumerate(self.vertex_degrees(g)))
while degrees:
min_index, min_value = min(degrees, key=operator.itemgetter(1))
v_ordered.add(min_index)
degrees.remove((min_index, min_value))
Hello, I see you sort by degree here, but then you put min_index into a set. We can't control the ordering of a set.
Doesn't this destroy the ordering you just computed?
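To illustrate the concern: a Python set does not guarantee insertion order, while a list does. A minimal order-preserving variant of the quoted loop (simplified, and not updating neighbor degrees as a full degeneracy ordering would) might look like:

```python
import operator

def degeneracy_order(degrees):
    """Return vertex indices in ascending-degree removal order.

    degrees: iterable of (vertex_index, degree) pairs. A list (unlike set())
    preserves the order in which minimum-degree vertices are removed.
    """
    degrees = list(degrees)
    v_ordered = []  # a list keeps insertion order; a set would not guarantee it
    while degrees:
        min_index, min_value = min(degrees, key=operator.itemgetter(1))
        v_ordered.append(min_index)
        degrees.remove((min_index, min_value))
    return v_ordered

# Vertex 2 has the smallest degree, so it comes out first.
print(degeneracy_order([(0, 3), (1, 2), (2, 1)]))  # → [2, 1, 0]
```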
Hi, I have tried this code on my dataset, which has 100 frames with at most 4 observations per frame. I noticed that the MWIS computation becomes really slow after frame 49. Here are the parameters I set:
image_area = 422288 # Image width x height in pixels
gating_area = 1 # Gating area for new detections
k = 0 # Gain or blending factor
q = 0.00001 # Kalman filter process variance
r = 0.01 # Estimate of measurement variance
n = 1 # N-scan pruning parameter
Would you suggest changing any parameters to make it run faster? Thank you so much!
The number of misses for a track is counted in the self.__nmiss attribute, but its initial value is set to nmiss, which is the maximum number of consecutive misses allowed before a track is deleted. Moreover, nmiss is never used to check that maximum: the condition is hard-coded to 3 (line 38 of kalman_filter.py):
if self.__nmiss > 3:
return False
As a result, when we create a new Kalman track at frame k-1 and then assume a missed detection at frame k, self.__nmiss becomes 4 instead of 1, and the branch gets pruned immediately.
I think we should maintain two separate attributes, self.__nmiss and self.__nmiss_max, initialize them to 0 and nmiss respectively, and change the condition to:
if self.__nmiss > self.__nmiss_max:
return False
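A minimal sketch of the proposed fix (class and method names here are illustrative, not copied from kalman_filter.py): a fresh track starts at zero misses and survives exactly nmiss_max consecutive misses before being pruned.

```python
# Hedged sketch of the two-attribute fix described above.
class Track:
    def __init__(self, nmiss_max=3):
        self.__nmiss = 0              # consecutive misses so far, starts at zero
        self.__nmiss_max = nmiss_max  # configured limit (the nmiss parameter)

    def missed_detection(self):
        """Record a miss; return False once the limit is exceeded."""
        self.__nmiss += 1
        return self.__nmiss <= self.__nmiss_max

t = Track(nmiss_max=3)
results = [t.missed_detection() for _ in range(4)]
print(results)  # → [True, True, True, False]
```

With the original code, the first miss on a new track would already exceed the threshold.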
Is it possible to adapt the CSV format or the algorithm to GPS data? And what about the "v" parameter with this kind of data?
I want to pass GPS positions to the algorithm instead of pixels. The idea is to have a map and detect points on it.
In graph.py, the set_edges function doesn't update the graph_dict. I think this is why it takes so long to compute the MWIS, since set_edges is used in the global_hypothesis function.
Line 37 in 6414359
Thus, when we take the complement, the current edges are not removed, because graph_dict is empty when self.__edges is called.
Line 104 in 6414359
This means the degeneracy ordering will be large and there will be a lot of recursive calls.
To fix this, I think the following lines:
Lines 84 to 85 in 6414359
just need to be changed to:
def set_edges(self, edges):
for edge in edges:
self.add_edge(edge)
Does this make sense or have I overlooked something?
In the repository, the probability of detection P_D is taken as 1/V. The default value of V is 307200; hence the missed-detection score becomes close to 0 (-3.255213631532517e-06), since 1/V is very small. This happens on line 20 of kalman_filter.py:
self.__missed_detection_score = np.log(1. - (1. / self.__image_area))
However, the paper mentions that they have taken P_D as 0.9, without much detail as to why they used this value:
However, the probability of detection should be high. If we take P_D as 1/V, its value is tiny compared to 0.9. Should we assume P_D and 1/V are the same value? If not, how should P_D be defined?
Hello, the project you have done is great, and I am trying to follow it to learn the MHT algorithm. However, I am struggling with the meaning of the parameters u and v: do they mean velocity and acceleration?
Hello, first, thanks for your great tracking module.
In the MOT challenge files, each box is represented by 4 variables (the left-upper box coordinates plus the width and height of the box). However, in the example file each box is given by only 2 variables, u and v. Can I ask what u and v mean in the input file?
Thanks.
This tool can be easily extended to support N-dimensional coordinates. Currently only 2D coordinates are supported.
Hi,
Just trying to understand what is meant by a 'frame'. I assume it means successive points in time where measurements are made, but I'm curious why frames are defined as integers and why time is not included explicitly. From the Kalman filter scripts, it doesn't look like any time step is included in the prediction, which seems a little limiting.
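For comparison, a constant-velocity Kalman prediction with an explicit time step dt (a generic sketch, not openmht's actual kalman_filter.py) would put dt in the state-transition matrix instead of implicitly assuming one time unit per frame:

```python
import numpy as np

# Hedged sketch: constant-velocity prediction with an explicit dt.
def predict(x, P, dt, q=1e-5):
    """Predict state x = [pos_u, pos_v, vel_u, vel_v] forward by dt seconds."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    Q = q * np.eye(4)  # simplified process-noise model
    return F @ x, F @ P @ F.T + Q

x = np.array([0.0, 0.0, 2.0, 1.0])  # moving 2 px/s in u, 1 px/s in v
P = np.eye(4)
x_pred, P_pred = predict(x, P, dt=0.5)
print(x_pred.tolist())  # → [1.0, 0.5, 2.0, 1.0]
```

With integer frames and no dt, the prediction effectively assumes uniformly spaced measurements.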
Thanks,
BillyPeanut
I think the branches_added variable should be initialized before the for loop over detections, rather than inside it as is done currently:
openmht/mht.py
for index, detection in enumerate(detections):
branches_added = 0 # Number of branches added to the track tree at this frame
detection_id = str(index)
This causes the loop to count only the branches added for the last detection. Instead, we should do the initialization before the loop:
branches_added = 0 # Number of branches added to the track tree at this frame
for index, detection in enumerate(detections):
detection_id = str(index)
In WeightedGraph.mwis, max_weight is initialized to min(self.__weights.values()), which causes a problem when all branches have the same score or there is only one branch: no branch is ever picked as the solution. Initializing it to min(self.__weights.values()) - 1, or any smaller value, fixes this.
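A minimal demonstration of the initialization bug (function and variable names are illustrative, not the module's actual mwis implementation):

```python
# Strict '>' against a max_weight seeded at the minimum never fires
# when every weight equals that minimum.
def pick_best(weights, fixed=False):
    """Return the key with the strictly greatest weight, mimicking a
    max-search seeded from min(weights.values())."""
    max_weight = min(weights.values()) - (1 if fixed else 0)
    best = None
    for node, w in weights.items():
        if w > max_weight:  # strict comparison: never true when all weights tie
            best, max_weight = node, w
    return best

equal = {0: 5.0, 1: 5.0}
print(pick_best(equal))              # → None: no branch is ever selected
print(pick_best(equal, fixed=True))  # → 0: seeding below the minimum fixes it
```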
Hello,
I was just wondering where I can set Bth, the threshold on the number of branches in a track tree that you mention in your paper?
Also, would you mind sharing your detection results on the MOT dataset? I could not run on my own data past 10 frames.
Thanks and Best Regards.