
defdap's Introduction

DefDAP

A Python library for correlating EBSD and HRDIC data.



How to install

  • DefDAP can be installed from PyPI using the command pip install defdap
  • If you use conda as your package manager (https://www.anaconda.com/), the prerequisite packages available from Anaconda can be installed using the command conda install scipy numpy matplotlib scikit-image pandas networkx jupyter ipython (tested with Anaconda3-2020.02)
  • For more information, see the documentation

How to use

  • To start the example notebook, use the command jupyter lab and click the notebook (.ipynb file)
  • Try out the example notebook using the binder link above (Note: change the matplotlib plotting mode to inline; some of the interactive features will not work)
  • Here is a video demonstrating the basic functionality of the package: DefDAP demo

Documentation

Contributing

  • For information about contributing see the guidelines
  • Any new functions or changes to arguments should be documented.

Credits

The software builds on a number of open source packages, including scipy, numpy, matplotlib, scikit-image, pandas and networkx.

License

Apache 2.0 license

defdap's People

Contributors

allanharte, dtfullwood, jqfonseca, merrygoat, mikesmic, rhysgt


defdap's Issues

Indentation of ebsd.py

Just pulled devel and the sample notebook won't run.

I think lines 239 to 248 of ebsd.py should be indented one tab stop?

Add testing template

Add an example template to show how pytest might be used. Include examples of fixtures.

Remove scikit-learn dependency

Find an alternative to MeanShift from scikit-learn package as it's quite large. Will probably write something myself.

Then remove the dependency from the readme and setup.py.

What is the number of slip bands within a grain

The cubic slip trace predictions work well in my Ni superalloys. I'd like a measure of the number of slip traces within a grain. Any ideas about how to implement this? I would settle for the total length of slip traces as opposed to their number.

Currently we use the radon transform to get the grain band angle distribution between 0-180 degrees and then we use peakutils to automate the identification of the peaks to spit out the angles. We could get the peakutils function to give us the 'intensity' of those peaks, which is proportional to the amount of slip on that slip system, but I don't know if this value is relevant when comparing the amount of slip occurring between different grains. Ideally I would like a length. Ideas welcome!
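As a starting point for experimenting with the "intensity" idea, the workflow above can be sketched with the radon transform plus a peak finder. This is a rough sketch only: scipy.signal.find_peaks stands in for peakutils, and slip_trace_angles is a hypothetical helper, not part of DefDAP's API.

```python
import numpy as np
from scipy.signal import find_peaks          # stand-in for peakutils
from skimage.transform import radon

def slip_trace_angles(grain_map, prominence=0.1):
    """Return candidate slip-trace angles (degrees) and peak 'intensities'."""
    theta = np.arange(180.0)
    sinogram = radon(grain_map, theta=theta, circle=False)
    # A straight band at angle t shows up as a sharp maximum in the
    # projection at t; max - mean gives a per-angle 'band intensity' profile
    profile = sinogram.max(axis=0) - sinogram.mean(axis=0)
    peaks, _ = find_peaks(profile, prominence=prominence * profile.max())
    return theta[peaks], profile[peaks]
```

The returned peak heights are the "intensity" values mentioned above; whether they compare meaningfully between grains (e.g. after normalising by grain area) is exactly the open question.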

Error in plotRDR

Hi everyone,

I am using the GrainInspector to analyze the slip traces observed in some sample Ni superalloy HRDIC maps provided by the authors in Zenodo. I am trying to use the RDR calculation after hitting the "Run RDR only on group 0" button, but I am getting an error stating that "AttributeError: 'Map' object has no attribute 'slipSystems'". I have tried to add the slipSystems variable to the Phase object as static, but then I got the same error on line 540 of the inspector.py script.

Any clue of what can be wrong? I can provide screenshots of my errors if necessary.

Thanks a lot in advance!!

Eugenia

How to set two slip systems for each phase

Cheers,

Could you clarify how to set each slip system for each phase? For example, I have a dataset with two-phase material and want to set a 'hexagonal_withca' slip system for the first phase and 'cubic_bcc' for the second one in order to analyze both.

Best,
Eugene

Error when using grain inspector for cubic materials


After installing the latest version of DefDAP, when running the grain inspector for cubic materials I get a "name 'quatToPlot' is not defined" error. I have just tried again on an older version and it works perfectly.

API proposal/discussion: make hrdic.Map.__getitem__ work like a dictionary instead of a list

Currently, the indices to get grains out of hrdic.Map are off by one relative to the integer IDs in the hrdic.Map.grains image. That is, hrdic.Map[0] gets the grain that has value 1 in the image, hrdic.Map[1] gets the grain with value 2, and so on. (And this might be incorrect if the grains array has non-contiguous labels, not sure.) This is confusing! I understand that this is perhaps inherited from or inspired by skimage.measure.regionprops, but over there we regret that API choice and will probably move to a dictionary-like API with the label as the key in a later version.

Would there be interest in taking the same approach here?
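A minimal sketch of what the dictionary-like lookup could look like (GrainMap here is a hypothetical stand-in, not hrdic.Map): the label in the grains image is the key, so non-contiguous labels work naturally.

```python
class GrainMap:
    """Dictionary-style grain container: label in the image == lookup key."""

    def __init__(self, grains):
        # grains: {label: grain object}; labels need not be contiguous
        self._grains = dict(grains)

    def __getitem__(self, label):
        try:
            return self._grains[label]
        except KeyError:
            raise KeyError(f"No grain with label {label}") from None

    def __iter__(self):
        return iter(self._grains.values())
```

With this, map[1] returns the grain labelled 1 in the image, removing the off-by-one confusion described above.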

Retrieving raw image for each grain

How difficult would it be to link the raw image to the grains segmented via EBSD? For example, we might like to see how the slip interacts with features in the raw image (particles, second phases, etc.)
Claudius and I could work on this but it would be good to know how hard you think it would be.

AZtecCrystal data export and Grain boundary fitting issues

We have encountered an issue with data import into DefDAP when our data are exported from AZtecCrystal v2.1; it returns an error message (see below). When we use the same EBSD data exported from AZtec v5.1 it works well. We use AZtecCrystal to crop the initial EBSD map to just the area where the DIC image acquisition was performed. What can be done to avoid the error message? Is it necessary to crop the initial EBSD data, which cover a significantly larger area than the DIC?

The second issue is related to fitting grain boundaries to a DIC map. After we select homologous points, the grain boundaries do not fit well on the DIC map (see below); only grain boundary fragments are shown in the final image. What can be the reason, given that we provided high-quality EBSD maps with a minimum of zero-solution points? It should also be noted that the transformation matrix obtained after DefDAP resolved the homologous points between the EBSD map and SEM image is significantly smaller than the reference and shifted from the central position of the plot. Could this be the reason the grain boundary overlay on the DIC map is incomplete? Or could some issues originate from the SEM (pattern) image (we use an SE-detector image instead of a BSE one)?

More general questions:

  • Does the program work reliably for larger strains, such as 7pct and higher? I have seen articles where DefDAP was utilized, and usually the tests were conducted at a lower strain level than in our studies.
  • Is there some margin of tolerance in DIC pixel size and EBSD step size? Do they have to be approximately equal?

Conda environment file

Could we create a conda environment file that can run the current version? We have a lot of dependencies and it would be nice to install them all in one go. There might also be a better way of doing this.
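As a starting point, a minimal environment.yml could simply list the packages already named in the install instructions (versions unpinned here; they would need pinning for real reproducibility):

```yaml
# environment.yml - sketch based on the package list in the install section
name: defdap
channels:
  - defaults
dependencies:
  - scipy
  - numpy
  - matplotlib
  - scikit-image
  - pandas
  - networkx
  - jupyter
  - ipython
  - pip
  - pip:
      - defdap
```

Installing would then be a single conda env create -f environment.yml.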

Rotated slip traces

Hi all!!

I'm using DefDAP for analyzing slip traces in my Ti sample. I know in advance that all my traces correspond to prismatic slip. However, when I read my EBSD file with DefDAP, all the predicted traces were rotated -90 degrees, so I cannot find any match. Any idea what might be happening or how to fix it? I'm using the develop branch of the code!

Thanks a lot in advance!

Eugenia

Interactive buttons for deciding slip character

Hello python people...

I would like a function to decide between types of slip character in individual grains. Slip character can be planar, wavy or diffuse. I cannot envisage a way of automating the code to make this distinction (other than using machine learning... which is something that we may want to discuss by the way!) and so I need a function that pops up several buttons that I can click to decide on the slip character myself.

In the DataAnalysisUtilities package there is a function that links grains from one EBSD map to another EBSD map (e.g. maps of the same area but pre- and post-deformation). This uses a "link" button, but I do not need to do anything as complicated as linking two data sets.

Does anyone know of a simple interactive function for a button that I could use?

Thanks, A

IPF map issue for HCP

I get the following error when using plotIPFmap

could not broadcast input array from shape (1366115) into shape (1361970)

Timeseries

I wrote a bit of code a while ago, to deal with DIC data from multiple deformation steps.

The way I implemented it was to have a different class (called Maps as opposed to Map) that you initialise by giving a path and file names to the DIC data. You can then do things like link an EBSD map, or set homog points, scales, crops etc. for all the DIC maps in one go. The key thing I used it for was plotting a DIC map with a slider at the bottom to flip between steps. I also made it so that you can call Maps[1] to give the corresponding Map.

It will need some improvement before adding it to the repo, but I wanted to see if this would be useful for anyone.

Assumption of loadVector

calcAverageSchmidFactor and the grain equivalent assume loadVector=[0,0,1] - this isn't ideal behaviour and might catch some people out. It would be better to raise an exception telling the user to define loadVector based on their sample.
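A sketch of the suggested guard (hypothetical function and argument names, shown only to illustrate raising instead of silently defaulting):

```python
import numpy as np

def calc_average_schmid_factor(grain, load_vector=None):
    """Refuse to run with an implicit load direction."""
    if load_vector is None:
        raise ValueError(
            "loadVector is not set. Define it explicitly for your sample "
            "geometry, e.g. load_vector=[1, 0, 0], instead of relying on "
            "an implicit [0, 0, 1] default."
        )
    load_vector = np.asarray(load_vector, dtype=float)
    # Normalise so downstream Schmid factor maths gets a unit vector
    return load_vector / np.linalg.norm(load_vector)
```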

Change the way EBSD data is linked to a DIC map

Currently, a flood-fill algorithm is used to detect grains (defined from an EBSD map) in the DIC map.

An alternative way of doing this is to warp the EBSD grain map directly using the affine (or polynomial, etc.) transform. Attached is an example of the EBSD grain list, the current DIC grain list and my proposed method. It has a few advantages:

  • Very fast (pretty much instantaneous)
  • Deals with non-indexed points better
  • EBSD and DIC grains have the same ID number

Since this would be quite a big change, it would be good to get opinions on it and to test it before implementing.
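For discussion, the warping approach can be prototyped in a few lines with scikit-image, assuming an integer-labelled ebsd_grains image and an already-fitted transform tform; order=0 (nearest neighbour) keeps grain IDs intact rather than blending them.

```python
import numpy as np
from skimage import transform

def warp_grain_map(ebsd_grains, tform, output_shape):
    """Warp an integer-labelled grain image onto the DIC frame."""
    warped = transform.warp(
        ebsd_grains.astype(float), tform.inverse,
        output_shape=output_shape,
        order=0,                 # nearest neighbour: labels stay integers
        preserve_range=True,     # keep raw label values, no rescaling
    )
    return warped.astype(ebsd_grains.dtype)
```

Non-indexed points simply warp along as their label (e.g. 0), and because the labels are carried over directly, EBSD and DIC grains share the same ID, matching the advantages listed above.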


Loading EDAX EBSD data

I gather I can load either cpr+crc file or a ctf file from Oxford / Aztec. It looks like there is also an option for a python-based database.
What are people regularly doing to load EDAX type data? (.ang files)
I can probably convert from ang to ctf format fairly easily.
Would it be easier to go to the python database option? If so, do you have an example of a format?
Thanks
David

This is really great. I have some questions.

  • Why does it take so long to calculate?
  • Once calculated once is there a way to cache this, maybe pickle and save to file?
  • Is the EBSD map from the starting or final configuration?
  • Slip trace analysis is pretty off for most of the grains. Why is that?
  • Is this distortion you see typical? Does it vary from sample to sample?

Matching DIC and EBSD after finding grains in a DIC map

The process of matching grains in a DIC map to an EBSD map needs to be made more robust, or some error handling added, so the process does not completely fail if a match is not found. A typical error that can occur (noted by @dtfullwood, thank you):


    IndexError                            Traceback (most recent call last)
    <ipython-input> in <module>()
         66 ### Detect grains in the DIC map
         67
    ---> 68 DicMap.findGrains(minGrainSize=100)
         69 DicMap.buildNeighbourNetwork()
         70 # **************************************************

    ~/Documents/GitHub/DefDAP/defdap/hrdic.py in findGrains(self, minGrainSize)
        743     modeId, _ = mode(self.ebsdMap.grains[warpedDicGrains == i + 1])
        744
    --> 745     self.ebsdGrainIds.append(modeId[0] - 1)
        746     self.grainList[i].ebsdGrainId = modeId[0] - 1
        747     self.grainList[i].ebsdGrain = self.ebsdMap.grainList[modeId[0] - 1]

    IndexError: index 0 is out of bounds for axis 0 with size 0

KAM: nearest neighbour, second nearest? etc...

Hi All

Am I correct in thinking that the current calcKam() function in ebsd.py takes the average of neighbouring pixels in the row and in the column, and then takes the average of those row and column values? Is this strictly a kernel? i.e. are the corner pixels of a 3x3 square kernel taken into account?

If we were to think about a 5x5 kernel or a 7x7, etc., would we have to do the calculation in a different way? There are scipy functions to convolve a 2D dataset with a kernel, such as scipy.signal.convolve2d().

@mikesmic maybe you think that we can adapt the current calcKam() function, or do you think that I should create a new function using something like convolve2d that is flexible for an nxn kernel?

Thanks, Allan
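To illustrate the convolve2d route, here is a sketch of an n x n neighbourhood average with the centre pixel excluded. Note this averages a precomputed scalar field, whereas a true KAM computes the quaternion misorientation to each neighbour first, so this shows only the kernel mechanics, not the full calculation.

```python
import numpy as np
from scipy.signal import convolve2d

def neighbourhood_average(field, n=3):
    """Average of each pixel's n x n neighbourhood, excluding the pixel itself."""
    kernel = np.ones((n, n))
    kernel[n // 2, n // 2] = 0   # exclude the centre pixel
    # Sum of neighbours, with reflected edges so the map boundary is handled
    total = convolve2d(field, kernel, mode='same', boundary='symm')
    # Number of neighbours actually summed at each pixel
    count = convolve2d(np.ones_like(field), kernel, mode='same', boundary='symm')
    return total / count
```

Swapping n=3 for 5 or 7 gives the larger kernels discussed above with no change to the code.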

Improving file io

In preparation for use with the beta reconstruction project I would like to work on some enhancements to the file io in defdap.

As a first step I have refactored the tests to make them a lot more readable. I have also greatly decreased the size of the test files. I am working in the io branch.

My proposals for improvement are:

  • Add ability to write CTF
  • Add ability to write CRC/CPR
  • Add a universal load_ebsd(fileName) method which calls the relevant loader depending on the file extension of the file name
  • Look into the possibility of a more fleshed out EBSD data object with instance variables - at the moment the object is just a container with dictionaries

Do let me know if you have any other requirements/restrictions/thoughts and I'll include them.
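The universal loader could be a simple extension-based dispatch; the loader functions below are stubs standing in for the real CTF/CRC readers:

```python
from pathlib import Path

# Stub loaders: in practice these would be the existing CTF/CPR+CRC readers
def load_ctf(file_name):
    return ('ctf', file_name)

def load_crc(file_name):
    return ('crc', file_name)

_LOADERS = {'.ctf': load_ctf, '.crc': load_crc, '.cpr': load_crc}

def load_ebsd(file_name):
    """Dispatch to the right reader based on the file extension."""
    ext = Path(file_name).suffix.lower()
    try:
        return _LOADERS[ext](file_name)
    except KeyError:
        raise ValueError(f"Unsupported EBSD file type: {ext!r}") from None
```

Adding a writer or a new format is then just another entry in the dispatch table.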

Add non-cubic slip systems

It would be good to add the ability to get slip traces and Schmid factor for non-cubic systems. I would be happy to help with this but we should agree how to do it first.

I would prefer to have a separate list, or dictionary of slip systems in a read-only (binary?) file that is read in and queried when slip activity calculations are required.

Green lagrange strain definition

Hi all,

Not sure if I can post this as an "issue", but I'm not very experienced with GitHub :)

I'm playing with some of the HRDIC data from "Harte et al, Acta, A statistical study of the relationship between plastic strain and lattice misorientation on the surface of a deformed Ni-based superalloy", and I was trying to reproduce the effective shear strain maps (using Matlab).

I didn't get the same effective shear strain magnitudes, so I started looking into the DefDAP code to see how the strains are exactly calculated (see below), and I don't understand how the "Green Strain" (which I assume to be the Green-Lagrange strain, or "E") is computed. The definition of E should be: E = 0.5*(Ft * F - I), with Ft the transpose of the deformation gradient tensor F.

Shouldn't E11 for example then be: 0.5 * (self.f11 * self.f11 + self.f21 * self.f21 - 1) ? Or am I missing something?

Many thanks for the response!

Best regards,
Tijmen Vermeij
Eindhoven University of Technology

line 162 of hrdic.py:

    # Deformation gradient
    self.f11 = xDispGrad[1] + 1
    self.f22 = yDispGrad[0] + 1
    self.f12 = xDispGrad[0]
    self.f21 = yDispGrad[1]

    # Green strain
    self.e11 = xDispGrad[1] + \
               0.5*(xDispGrad[1]*xDispGrad[1] + yDispGrad[1]*yDispGrad[1])
    self.e22 = yDispGrad[0] + \
               0.5*(xDispGrad[0]*xDispGrad[0] + yDispGrad[0]*yDispGrad[0])
    self.e12 = 0.5*(xDispGrad[0] + yDispGrad[1] +
                    xDispGrad[1]*xDispGrad[0] + yDispGrad[1]*yDispGrad[0])
    
    # max shear component
    self.eMaxShear = np.sqrt(((self.e11 - self.e22) / 2.)**2 + self.e12**2)
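One way to compare the two definitions is to build F numerically and evaluate E = 0.5 (Ft F - I) next to the expressions above (arbitrary example gradients; u_x denotes du/dx, matching xDispGrad[1], and so on). Expanding 0.5*(f11*f11 + f21*f21 - 1) with f11 = 1 + u_x and f21 = v_x gives u_x + 0.5*(u_x^2 + v_x^2), i.e. exactly the e11 line in the code, so the check should show the two forms agree term by term:

```python
import numpy as np

# Arbitrary example displacement gradients (u = x-displacement, v = y-displacement)
u_x, u_y = 0.02, -0.01   # xDispGrad[1], xDispGrad[0]
v_x, v_y = 0.03, 0.015   # yDispGrad[1], yDispGrad[0]

# Deformation gradient and textbook Green-Lagrange strain E = 0.5 (F^T F - I)
F = np.array([[1 + u_x, u_y],
              [v_x, 1 + v_y]])
E = 0.5 * (F.T @ F - np.eye(2))

# The expressions as written in hrdic.py above
e11 = u_x + 0.5 * (u_x * u_x + v_x * v_x)
e22 = v_y + 0.5 * (u_y * u_y + v_y * v_y)
e12 = 0.5 * (u_y + v_x + u_x * u_y + v_x * v_y)
```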

Smaller sample data

It would be good if the sample dataset was much smaller. This would make testing much easier and quicker. Perhaps 2 datasets? A 10 by 10 pixel version for testing file reading and simple matrix methods, and a 200 by 200 pixel dataset, since that should be enough to see a reasonable number of grains and so can be used for larger integration tests/examples.

Is that experimentally valid or is the whole dataset required?

Can't type in Grain Inspector

Hi all,

My colleague and I are working on some analysis of HRDIC data with the use of the Grain Inspector interface. We are using the develop branch and everything is working fine for me on Mac. However, my colleague is using Windows 10 and she can't type in the text boxes included in the Grain Inspector window. She can click, but she cannot type to set the grain ID or the group number to run RDR. Surprisingly, she can still draw lines over the slip traces and save them. Also, the buttons work and she can go to the next grain and save lines. We have tried setting different matplotlib backends like %matplotlib qt or notebook, but only tk works on her computer.

We are using exactly the same code and data. Any idea of what can be wrong? Thanks in advance!!

Eugenia

Recent error with RDR

As discussed briefly earlier, a new error has appeared when using RDR in DefDAP.

When I run the grain inspector tool I get

AttributeError: 'FigureCanvasTkAgg' object has no attribute 'set_window_title'

I thought that this might be something with tk, so I swapped to using qt and still get the same error. I'm not sure if this is caused by defdap or some external package as I've tried running this with a clean conda environment with a pip install directly and by cloning the develop branch from github and installing it locally and both still give the same error.

IPFs

I have uploaded example IPF data and a notebook to my repository. The IPFY does not match Channel5 but I can't figure out why - suggestions welcome - likely a math error when calculating the angles for IPFY.

I have included screenshots of IPFs from Channel5 data for comparison. Note that the Channel5 .ctf in the example_data_AH/ipf directory only works in Mambo - I made the file myself, so it has no spatial resolution and therefore cannot be plotted in Tango.

Add type hinting

Details here: https://docs.python.org/3/library/typing.html. Already finished the process in the following modules:

  • base.py
  • crystal.py
  • file_readers.py
  • file_writers.py
  • hrdic.py
  • inspector.py
  • plotting.py
  • quat.py
  • utils.py

I have also added a sphinx extension, which means that we no longer need to repeat the type in the docstring arguments.

Note: If you want to specify a description of what is returned, then you also need the return type in the docstring (even if it's type hinted).
Note: NumPy is adding a typing library in 1.20.0; until then it is not possible to use 'np.array[Quat]' (for example) as a type, so I have not fully defined the type in this case and have left it in the docstring instead. In future, it will be possible to do from numpy.typing import ArrayLike and then use ArrayLike[Quat] as a type hint.
Note: When referring to the parent class from a method type hint, a string literal needs to be used for forward reference in python <3.10. Postponed evaluation of annotations has been added in Python 3.10 (see https://www.python.org/dev/peps/pep-0563/).

Started in #79
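A minimal illustration of the forward-reference point above (hypothetical methods, not DefDAP's real signatures): the class name is quoted because the class is not yet fully defined at the point the annotation is evaluated.

```python
from typing import List

class Quat:
    # 'Quat' is a string literal forward reference: required when referring
    # to the enclosing class in Python < 3.10 without postponed evaluation
    def conjugate(self) -> 'Quat':
        return self

    @staticmethod
    def mean(quats: List['Quat']) -> 'Quat':
        return quats[0]
```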

Move sample data to a repository

I installed DefDap on a different computer today and it took so long tod download everything, primarily because the example data (which is needed) is relatively large. Could I suggest we move it to a repository, like Zenodo and then have a command to download it in the example notebook?

Consider using external quaternion library

One of the current performance bottlenecks in DefDAP is the implementation of quaternions. The Quaternion object is written in pure Python and not vectorised.

Would you consider adding an external dependency for quaternions? This one looks mature and seems to have all of the required features: https://github.com/moble/quaternion. They are vectorised in numpy arrays with ufuncs and have the core in C so are quite fast. Installation is just with pip (not all releases have binary wheels so use pip install numpy-quaternion --only-binary :all: to avoid having to compile).

If you are interested, I would first mock some quick benchmarks to see if they would be an improvement. Then if they are, I would look to finish making tests for all of the functions of the current Quaternion object and then write the same tests for the library method. Once we are happy that they are doing the same thing it would be possible to move across.
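For the benchmarking step, here is a rough pure-NumPy sketch of a vectorised Hamilton product (scalar-first convention) to time against both the current pure-Python Quat and the numpy-quaternion library:

```python
import numpy as np

def quat_multiply(p, q):
    """Hamilton product of quaternion arrays shaped (..., 4), scalar first."""
    pw, px, py, pz = np.moveaxis(p, -1, 0)
    qw, qx, qy, qz = np.moveaxis(q, -1, 0)
    # Standard Hamilton product, broadcast over any leading array dimensions
    return np.stack([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ], axis=-1)
```

Because the operation broadcasts over whole arrays of quaternions at once, this already avoids the per-object Python overhead that makes the current implementation slow; the C-backed library should be faster still.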

Clean up EBSD data?

Hi all,

I was wondering when using Defdap for the EBSD analysis, is there a way of denoising the data? Because inevitably there are some non-indexed regions in the maps and they'll need a certain level of clean-up. I've read the API documentation and seems there's not a function for denoising the data. Maybe I have missed something.

Any advice is much appreciated!

Many thanks,
Jie

HRDIC data loader not working for davis 10 files

Issues to fix for davis 10 format

  • Coordinate values are formatted as floats (even for int values)
  • Coordinate values are not necessarily integers (Only half integers because of odd sized integration windows?)
  • Coordinate values can -ve
  • Possibly different units for coordinates and displacements, assumed to be the same when calculating strain currently

origin
image001

Filtering of data

I have implemented a basic filtering algorithm in the filtering branch, which I have personally used. It's very basic but works well. It is based on simple thresholding of effective shear strain values, then binary dilation of the mask to ensure all erroneous data points are removed. I set these values to NaN, which works fine with current DefDAP functions, though maybe not with the RDR calculations.
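The algorithm described reads roughly like this (filter_strain_map is a hypothetical name and the threshold value is arbitrary; the real implementation lives in the filtering branch):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def filter_strain_map(max_shear, threshold=0.5, dilate_iter=1):
    """Mask erroneous strain values by thresholding, then grow the mask."""
    bad = max_shear > threshold                          # flag erroneous points
    bad = binary_dilation(bad, iterations=dilate_iter)   # catch halos around them
    filtered = max_shear.astype(float).copy()
    filtered[bad] = np.nan                               # NaNs pass through plotting
    return filtered
```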

I would be interested to hear if others might have a better method and the strategy for this in future.

Memory usage

Memory usage of DefDAP can be quite high. For two EBSD maps (admittedly quite large ones) - it occupies ~1.4 GB of memory. For two large HRDIC maps, it's using ~1.8 GB. This makes it tricky to compare lots of strain steps on a machine which doesn't have much RAM for example.

Might be worth changing the dtype for some of the data - for example, x coordinates read in from a DaVis DIC map are stored as float64 even though they are just unsigned integers. These occupy 0.7 GB of memory at the moment, as do the displacements. Right now I'm deleting these since I don't actually need them. Perhaps there could be an argument like keepDisplacements, which removes the displacements if they are not needed?

Additionally, linking the DIC and EBSD maps increases usage by another 2 GB. When RAM is exhausted, calculations become very slow of course.
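To put numbers on the dtype suggestion: integer-valued coordinates held as float64 cost four times what a 16-bit integer dtype would (the 2000 x 2000 map size here is an arbitrary example, and uint16 assumes the values fit in 16 bits):

```python
import numpy as np

# A 2000 x 2000 map of integer-valued x coordinates, as loaded from DaVis
coords_f64 = np.zeros((2000, 2000), dtype=np.float64)   # 8 bytes/element, 32 MB
coords_u16 = coords_f64.astype(np.uint16)               # 2 bytes/element, 8 MB

saving = coords_f64.nbytes / coords_u16.nbytes          # 4x less memory
```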

Euler map issue

My Euler maps for hcp alloys come out very green. There might be an error in how the colours are calculated.

