NUnit Framework
Home Page: https://nunit.org/
License: MIT License
To distinguish the results on Win7 x86 from those on Win7 x64, the user needs this information in the result file.
So my idea was to extend the environment node in the result file
Obviously, the current implementation is the following:
Is it possible to change the first two steps? The current behaviour is a problem for us, because the property for the TestCaseSource needs some data from the database, whose connection is established in FixtureSetUp. The parameters for the database (username, password, and so on) are set in the constructor of the class.
More info at https://bugs.launchpad.net/nunit-3.0/+bug/664018
There should be a way to add a TestCase conditioned on the platform.
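A sketch of what this might look like, reusing the include/exclude vocabulary of the existing Platform attribute. The IncludePlatform and ExcludePlatform property names below are assumptions, not an existing API:

```csharp
// Hypothetical syntax: the named properties are assumptions,
// modeled on the existing [Platform] attribute's include/exclude lists.
[TestCase(1024, IncludePlatform = "Win")]   // would run only on Windows
[TestCase(4096, ExcludePlatform = "Linux")] // would run everywhere but Linux
public void PageSizeIsValid(int pageSize)
{
    Assert.That(pageSize % 512, Is.EqualTo(0));
}
```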
Build NUnitLite using the monotouch profile. This requires a Mac.
The Values attribute should support enum types as argument.
Example:
[Test]
public void MyTest([Values(typeof(MyEnumType))]MyEnumType myEnumArgument)
{
...
}
Maybe the argument is even unnecessary; the attribute could just work when applied to a test method argument of any enum type.
Example:
[Test]
public void MyTest([Values]MyEnumType myEnumArgument)
{
...
}
I have been trying to find out how to test asynchronous code from F# without having to manually unwrap the async value; instead, the test framework should do that for you.
Since it seems 3.0 will bring with it improvements to async as well as the ability to parallelise tests, it would be great if there was some guidance on how to use the project with F#.
My question is simpler than this however; how do I create tests with F# that are async without wrapping them in TPL?
NUnit 2.5.2 and 2.5.3 run every method targeted by TestCaseSource, even if the test targeting it is Explicit or the test fixture is Explicit.
The behaviour I expect is: NUnit shouldn't run a test's TestCaseSource if the test itself won't be run.
Detail:
I have an expensive TestCaseSource generating a few hundred thousand permutations and combinations. I've applied Explicit to both the test targeting the source and the fixture surrounding both test and source.
NUnit console spends ten minutes needlessly constructing test cases from the TestCaseSource before ignoring them and proceeding with the non-Explicit tests.
If I throw NotImplementedException from the first line of the expensive TestCaseSource, NUnit runs the non-Explicit tests immediately. If I put a MessageBox.Show call in the first line, I see the message box.
More info at https://bugs.launchpad.net/nunit-3.0/+bug/538070
Problem: Using log4net with newer versions of nunit means that no
log file is created.
Demonstration follows.
I created the solution in VS 2008.
The OS is Windows XP.
I tested using both the nunit GUI and console.
I compared NUnit 2.2.5 with 2.5.5 (I suspect 2.5.0 will fail as well).
The solution is created as a console app.
When the app is run as a console application a Log file is produced.
When the app is run using 2.2.5 a Log file is produced.
When the app is run using 2.5.5 no Log file is produced.
Repro details and other info at https://bugs.launchpad.net/nunit-3.0/+bug/605034
We need a Windows Phone 7 / .NET 3.7 build
In many cases a bug is reported to a project along with the unit test. The unit test needs to be integrated, but it may take some time before it can be resolved. In this case, it isn't clear what its state should be:
A [KnownBug] mark would semantically handle this, and would effectively create an "open bug list" within the test suite. It would also allow GUIs to visually indicate the state as distinct from [Ignore].
If this seems like a good idea I'll be happy to submit a patch.
See discussion at https://bugs.launchpad.net/nunit-3.0/+bug/1216854
In our framework we have a feature that is very important for us: Ignore can have two parameters, description and enddate. When enddate is earlier than the current date, the test is no longer ignored and is (normally) displayed as a test with an error.
Currently, there is more than one part to change, because Ignore is also used with TestCase, TestFixture and so on.
Ignore, IgnoreReason, IgnoreUpto, and IgnoreReasonUpto will then be needed.
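The proposal might look like this in use. The IgnoreUpto attribute and its Reason property below are the requested extension, not an existing NUnit API:

```csharp
// Hypothetical: the date is the end of the ignore period. Once it
// passes, the test runs again and is reported as an error if it fails.
[Test]
[IgnoreUpto("2014-03-01", Reason = "Waiting for fix of bug #1234")]
public void ExportHandlesUnicodeFileNames()
{
    // ...
}
```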
See discussion at https://bugs.launchpad.net/nunit-3.0/+bug/628881
First the problem:
We write a test that cannot be satisfied yet, because the implementation of the functionality is not ready. We give it the Explicit attribute, so the developer can run the test while implementing the functionality. But the person who looks at the tests does not see it, so we make an empty copy of the test and give it the Ignore attribute.
If we give the test both attributes, Explicit and Ignore, the test is not runnable under the GUI.
A possibility would be that ignored tests can be run if a special option is set (perhaps on the tab for categories or command line).
This would be a very nice feature!
See discussion at https://bugs.launchpad.net/nunit-3.0/+bug/777810
As discussed on NUnit-Discuss:
https://groups.google.com/d/msg/nunit-discuss/3vfIpXbU7Jo/uuJ96Qri9csJ
Is.InRange IntelliSense specifies direction-independent arguments "from" and "to" but behaves direction-dependently (i.e., as "min" and "max").
Example:
Assert.That(5.5f, Is.InRange<float>(5.4f, 5.6f));
passes, as 5.5 is between 5.4 and 5.6, but fails when the arguments are reversed:
Assert.That(5.5f, Is.InRange<float>(5.6f, 5.4f));
A nice feature would be a Version attribute. When the assembly version is at least the given value, the test is active. If the assembly version is too low, the test is ignored.
The version could be the version of the assembly which contains the test, or delivered by a public static property of a special class (as with TestCaseSource).
TestCase, TestFixture and so on must also support this feature.
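Until such an attribute exists, a rough workaround is to ignore the test from SetUp based on the assembly version. ProductionType below is a placeholder for any type in the assembly under test:

```csharp
[SetUp]
public void RequireMinimumVersion()
{
    // ProductionType is a placeholder for a type in the assembly under test.
    Version actual = typeof(ProductionType).Assembly.GetName().Version;
    if (actual < new Version(2, 1))
        Assert.Ignore("Requires assembly version 2.1 or later, found " + actual);
}
```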
See discussion at https://bugs.launchpad.net/nunit-3.0/+bug/628884
I need a new attribute similar to [Repeat] that starts the test on multiple threads concurrently. A similar attribute is available in MbUnit - ThreadedRepeat. The [ThreadedRepeat(5)] attribute will call the test 5 times in parallel, firing off a separate thread for each one.
I couldn't find the MbUnit documentation for the ThreadedRepeat attribute, but there is a blog post that explains it: http://darrell.mozingo.net/2008/05/30/mbunits-threadedrepeat-attribute/
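Until such an attribute exists, the same idea can be sketched inside a single test; the thread count and BodyUnderTest method below are illustrative:

```csharp
[Test]
public void HandlesConcurrentCalls()
{
    // Poor man's ThreadedRepeat(5): run the body on five threads at once.
    var threads = new List<Thread>();
    for (int i = 0; i < 5; i++)
        threads.Add(new Thread(() => BodyUnderTest())); // BodyUnderTest is a placeholder
    threads.ForEach(t => t.Start());
    threads.ForEach(t => t.Join());
}
```

Note that assertion failures raised on the worker threads are not observed by NUnit, which is part of why a built-in attribute would be valuable.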
CI server should build on each change, run tests and support all platforms.
Nested TestFixtures should be executed as sub-tests of the class that contains them.
This is based on and a necessary prerequisite for https://bugs.launchpad.net/nunit-3.0/+bug/782983
(NUnit 2.5.10.11092 with GUI runner)
The behaviour of TearDown and TestFixtureTearDown in case of an Exception in the corresponding SetUp is described in the same way, but behaves differently.
So long as any SetUp method runs without error, the TearDown method is guaranteed to run. It will not run if a SetUp method fails or throws an exception.
So long as any TestFixtureSetUp method runs without error, the TestFixtureTearDown method is guaranteed to run. It will not run if a TestFixtureSetUp method fails or throws an exception.
I have the following two very simple classes:
[TestFixture]
public class Base
{
[TestFixtureSetUp]
public void TestFixtureSetUp()
{
Console.WriteLine("Running TestFixtureSetUp Base");
}
[SetUp]
public void SetUp()
{
Console.WriteLine("Running SetUp Base");
}
[Test]
public void TestBase()
{
Console.WriteLine("Running Test Base");
}
[TearDown]
public void TearDown()
{
Console.WriteLine("Running TearDown Base");
}
[TestFixtureTearDown]
public void TestFixtureTearDown()
{
Console.WriteLine("Running TestFixtureTearDown Base");
}
}
[TestFixture]
public class Derived : Base
{
[TestFixtureSetUp]
public new void TestFixtureSetUp()
{
Console.WriteLine("Running TestFixtureSetUp Derived");
}
[SetUp]
public new void SetUp()
{
Console.WriteLine("Running SetUp Derived");
}
[Test]
public void TestDerived()
{
Console.WriteLine("Running Test Derived");
}
[TearDown]
public new void TearDown()
{
Console.WriteLine("Running TearDown Derived");
}
[TestFixtureTearDown]
public new void TestFixtureTearDown()
{
Console.WriteLine("Running TestFixtureTearDown Derived");
}
}
Executing TestDerived works as expected:
Running TestFixtureSetUp Base
Running TestFixtureSetUp Derived
Running SetUp Base
Running SetUp Derived
Running Test Derived
Running TearDown Derived
Running TearDown Base
Running TestFixtureTearDown Derived
Running TestFixtureTearDown Base
Now I introduce an Exception in SetUp of Base:
[SetUp]
public void SetUp()
{
Console.WriteLine("Running SetUp Base");
throw new Exception("Exception in SetUp Base");
}
Executing TestDerived now gives this output:
Running TestFixtureSetUp Base
Running TestFixtureSetUp Derived
Running SetUp Base
Running TearDown Derived
Running TearDown Base
Test 'Avl.TestAutomationFramework.Infrastructure.UnitTests.TestDriver.Derived.TestDerived' failed: SetUp : System.Exception : Exception in SetUp Base
Base.cs(18,0): at Avl.TestAutomationFramework.Infrastructure.UnitTests.TestDriver.Base.SetUp()
Running TestFixtureTearDown Derived
Running TestFixtureTearDown Base
Since the behaviour of TearDown and TestFixtureTearDown in case of an exception in the corresponding SetUp function is described the same way, word by word, I would expect that if there was an Exception in TestFixtureSetUp Base, then also both TestFixtureTearDown Derived and TestFixtureTearDown Base will be called.
However, if I remove the Exception from SetUp Base again and instead add one to TestFixtureSetUp Base...
[TestFixtureSetUp]
public void TestFixtureSetUp()
{
Console.WriteLine("Running TestFixtureSetUp Base");
throw new Exception("Exception in TestFixtureSetUp Base");
}
... then running TestDerived gives the following output:
Running TestFixtureSetUp Base
Test 'Avl.TestAutomationFramework.Infrastructure.UnitTests.TestDriver.Derived.TestDerived' failed: TestFixtureSetUp failed in Derived
TestFixture failed: SetUp : System.Exception : Exception in TestFixtureSetUp Base
at Avl.TestAutomationFramework.Infrastructure.UnitTests.TestDriver.Base.TestFixtureSetUp() in D:\git\TestAutomationFramework_V2013\Projects\Infrastructure\UnitTests\TestDriver\Base.cs:line 13
In my DLL there are a few TestFixtures, divided to various categories with the Category attribute.
There's also one SetUpFixture that has a global SetUp method, and it's marked with Category("OneCategory").
When I run nunit (either console or GUI) with "/include=OneCategory", the global SetUp method runs, followed by all tests from OneCategory. This is great. However, when I run nunit with "/include=OtherCategory", the same global SetUp method still runs - followed by all tests from OtherCategory. Adding an explicit "/exclude=OneCategory" doesn't seem to help either.
I'd expect that category inclusion and exclusion would also affect the SetUpFixtures that run, not only TestFixtures that run.
See comment at https://bugs.launchpad.net/nunit-3.0/+bug/616226
When a user-defined class overrides Equals, we should use it even if the class implements IEnumerable, ICollection, etc.
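For illustration, a hypothetical type where the current behaviour surprises: NUnit compares the two objects element by element as sequences and never calls the overridden Equals.

```csharp
// Hypothetical type: equality is defined by Id, yet the class is enumerable.
public class TagSet : IEnumerable<string>
{
    public int Id;
    private readonly List<string> tags = new List<string>();

    public override bool Equals(object obj)
    {
        var other = obj as TagSet;
        return other != null && other.Id == Id;
    }
    public override int GetHashCode() { return Id; }

    public IEnumerator<string> GetEnumerator() { return tags.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

// Is.EqualTo currently walks both sequences instead of calling TagSet.Equals.
```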
See discussion at http://groups.google.com/group/nunit-discuss/browse_thread/thread/3ad31a263aaaba40?hl=en
Add constraints that test whether a dictionary contains a certain key and a certain value.
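In use, the requested constraints might read something like the following; the exact factory method names are assumptions, not an existing API:

```csharp
var map = new Dictionary<string, int> { { "answer", 42 } };

// Hypothetical syntax for the requested constraints:
Assert.That(map, Contains.Key("answer"));
Assert.That(map, Contains.Value(42));
Assert.That(map, Contains.Key("answer").WithValue(42));
```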
Based on request at https://bugs.launchpad.net/nunit-3.0/+bug/780607
I have a test fixture marked with:
[TestFixture]
[Explicit, Category("SomeCategory")]
Tests in this fixture are run when the console runner specifies "SomeCategory".
The tests in this fixture CONTINUE to be run when I add [Ignore] to the class. They only stop being run when I remove the [Explicit] attribute.
When Settings->Gui->Tree Display->Test structure->Flat list of TestFixtures is selected then
More info at https://bugs.launchpad.net/nunit-3.0/+bug/807873
NUnit version 2.5.7
Runner: Console and GUI
Feature Request:
I'd love to have the ability to override a test's result (e.g. Pass/Fail/Error) in the [TearDown] method along with the option to overwrite the error message.
Some more background:
I just started with a team that uses NUnit for automation testing. Given that NUnit is specifically built for unit testing, I've been surprised just how well it meets our needs. The only hangup I currently have is that we have certain classes of errors that only occur on threads other than the main thread. Without going into details, it's basically impossible for us to respond to these errors on the main thread without adding code everywhere to poll whether any of these errors have occurred and, if so, Assert.Fail(). Without detecting the errors at all, our code ends up asserting or throwing an exception for something else on the main thread, which just obfuscates the real problem in the resulting TestResult.xml file.
I understand why NUnit doesn't pick up assertions/exceptions on different threads, but we could easily workaround this if we had the ability to override the test result and error message in the [TearDown] method. At that point we'd know if any of these errors had occurred and could signal the test as failing and give a more appropriate message.
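A sketch of the kind of API that would solve this. The SetResult method and the BackgroundErrors collector below are hypothetical:

```csharp
[TearDown]
public void TearDown()
{
    // BackgroundErrors is a placeholder for whatever mechanism collects
    // failures raised on non-main threads during the test.
    if (BackgroundErrors.Count > 0)
    {
        // Hypothetical API: override outcome and message from TearDown.
        TestContext.CurrentContext.SetResult(
            ResultState.Failure,
            "Background thread error: " + BackgroundErrors[0].Message);
    }
}
```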
See discussion at https://bugs.launchpad.net/nunit-3.0/+bug/691455
It seems like having an [Issue] attribute, with the URL pointing to a bug tracker page related to the test, would be a useful thing...
See discussion at https://bugs.launchpad.net/nunit-3.0/+bug/1216855
I subclassed the ExpectedException attribute, passing the AssertionException type to the base constructor.
Now if I attribute my test method with both my subclass and ExpectedException(AssertionException), I get runner- and attribute-order-dependent behaviour:
for:
<Test()> <ObservedBehaviour("Code generator produces duplicates.")> <ExpectedException(GetType(AssertionException))> _
Public Sub ObservedBehaviourAfterChangeTest()
yielding:
Observed behaviour has been changed. Please balance the value of the change with compatibility breach costs.
Originally observed behaviour: Code generator produces duplicates.
NUnit.Framework.AssertionException was expected
while for
<Test()> <ExpectedException(GetType(AssertionException))> <ObservedBehaviour("Code generator produces duplicates.")> _
Public Sub ObservedBehaviourAfterChangeTest()
returning:
NUnit.Framework.AssertionException was expected
The documentation deserves a clarification.
See discussion at https://bugs.launchpad.net/nunit-3.0/+bug/532536
The SimpleTestRunner performs an odd skip of exception processing when faced with a fully un-implemented EventListener. What happens is that my RunStarted throws an exception, this gets caught, and then RunFinished is called, which then throws an exception; the stack trace makes it look like RunFinished was the culprit when in fact RunStarted began the exceptional behavior. This occurs mainly because the catch( Exception exception ) catches everything. Perhaps what it should do is catch NUnit exceptions and let all other exceptions propagate; I'm not familiar enough with the architecture around exception testing, assert exceptions, etc.
So here is the culprit code in SimpleTestRunner, Line 141, of release 2.5.5.101112.
public virtual TestResult Run( EventListener listener, ITestFilter filter )
{
try
{
log.Debug("Starting test run");
// Take note of the fact that we are running
this.runThread = Thread.CurrentThread;
listener.RunStarted( this.Test.TestName.FullName, test.CountTestCases( filter ) );
testResult = test.Run( listener, filter );
// Signal that we are done
listener.RunFinished( testResult );
log.Debug("Test run complete");
// Return result array
return testResult;
}
catch( Exception exception )
{
// RunStart actually threw the exception. so RunFinish doesn't make sense.
// RunFinish then throws an exception when really the first exception should be handled first.
// Signal that we finished with an exception
listener.RunFinished( exception );
// Rethrow - should we do this?
throw;
}
finally
{
runThread = null;
}
}
Cheers,
L
All support for generic test methods is missing in the compact framework builds, although generic fixtures are supported. Both CF 2.0 and 3.5 support generics, so we should as well, at least to whatever extent it's possible.
That said, if this requires significant effort, we may consider postponing it to a future release.
TestCaseSource always instantiates the object providing data using the default constructor. Parameterized fixtures usually don't have one and adding one would mean that no parameters were available to generate data.
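A minimal illustration of the conflict; the types and names are illustrative:

```csharp
[TestFixture("server=prod")]   // parameterized fixture: no default constructor
public class QueryTests
{
    private readonly string connection;
    public QueryTests(string connection) { this.connection = connection; }

    // NUnit instantiates the source provider (here, the fixture itself)
    // via its default constructor, which does not exist.
    [Test, TestCaseSource("Cases")]
    public void RunsQuery(int id) { /* ... */ }

    public IEnumerable<int> Cases
    {
        get { /* depends on 'connection' */ yield return 1; }
    }
}
```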
Currently NUnit unrolls the InnerException property messages for an exception.
It would be nice if it also recursively unrolls the InnerExceptions (note the s) property of the System.AggregateException.
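A sketch of the recursive unrolling in plain C#; the method name is illustrative:

```csharp
// Collects an exception plus everything reachable through InnerException
// and, for AggregateException, the InnerExceptions collection.
static IEnumerable<Exception> Unroll(Exception ex)
{
    yield return ex;
    var agg = ex as AggregateException;
    if (agg != null)
    {
        foreach (var inner in agg.InnerExceptions)
            foreach (var e in Unroll(inner))
                yield return e;
    }
    else if (ex.InnerException != null)
    {
        foreach (var e in Unroll(ex.InnerException))
            yield return e;
    }
}
```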
[This issue maps to Launchpad bug https://bugs.launchpad.net/nunit-3.0/+bug/1170927]
Currently the RequiresThread, RequiresSTA, and RequiresMTA attributes are all marked as not inheritable. In my case I have a base class for tests related to the UI, with a TestFixtureSetUp method to make everything run. For this to work, however, the tests have to run in an STA thread; since the attributes are marked as not inheritable, I need to set them on every TestFixture. It would be much easier to set the attribute once on the base class and have the derived test fixture classes inherit it.
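What the request would enable, sketched below; today the attribute on the base class is simply not picked up by derived fixtures:

```csharp
[RequiresSTA]   // the request: let derived fixtures inherit this
public abstract class UiTestBase
{
    [TestFixtureSetUp]
    public void StartUiInfrastructure() { /* needs an STA thread */ }
}

// Today this fixture must repeat [RequiresSTA] itself; with an
// inheritable attribute it would pick it up from UiTestBase.
[TestFixture]
public class ButtonTests : UiTestBase
{
    [Test]
    public void ClickRaisesEvent() { /* ... */ }
}
```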
Currently, it's only possible to do a release of NUnitLite on a machine with all the runtimes we support installed. This makes it difficult for contributors to handle releases. It would be better if the builds could be completed on different machines and then aggregated on a separate server.
The main changes needed to the build script to accomplish this are:
In order to make sure that all the builds for a release are all based on the same source, we will need to tag the published packages with a revision number and/or date.
To assert a property name, one can use the following
Assert.That (testDelegate,
Throws.ArgumentException.With.Property("ParamName").EqualTo ("name"));
or this one, as a refactor-friendly way
Assert.That (testDelegate,
Throws.ArgumentException.And.Matches<ArgumentException> (x => x.ParamName == "name"));
It works but the error text reported when the assert is not satisfied is rather generic:
Expected: <System.ArgumentException> and value matching lambda expression
There should be an easier way to express the intent and have a better error message, something like:
Assert.That (testDelegate,
Throws.ArgumentException.And.Property(ex => ex.ParamName).EqualTo ("name"));
with the error reporting:
Expected: Property lambda ex.ParamName with value "name"
but got
Property lambda ex.ParamName with value "myParam"
TestCaseData might also benefit from this if it implemented the lambda property functionality.
As discussed here: https://groups.google.com/forum/#!topic/nunit-discuss/Fg0oFzR8owE
Currently, NUnit supports a number of generic constraints that define the type of the tested expression at compile time (if they succeed): Is.TypeOf<T>(), Is.AssignableTo<T>(), and Is.InstanceOf<T>(). (Note that Is.AssignableFrom<T>() is not a candidate for this feature request because it only defines a derived type!)
Those constraint factory methods currently return instances of non-generic expression types (e.g., ExactTypeConstraint) that inherit the ordinary, untyped constraint expression factory methods, so that you can write, for example: Is.TypeOf<object>().With.Property("P").EqualTo(42). Note that when "With" is called, the type information specified at the TypeOf<T> call gets lost, and the Property constraint needs to take the property name as a string rather than, for example, as an expression.
Passing the property name as a string is bad, as an invalid name will not trigger a compile-time error and as automated refactorings (e.g., renaming the property) will not take notice of the property name.
Therefore, change TypeOf<T>, AssignableTo<T>, and InstanceOf<T> to return generic subclasses of the constraint types (e.g., ExactTypeConstraint<T>) that redefine Constraint.With to retain the type information. Provide generic variants of ConstraintExpression and ResolvableConstraintExpression to allow for nested strongly typed constraints. Then provide a generic version of the Property constraint factory method.
Here is what the code could look like for NUnit 2.6 (with non-breaking changes).
public class ExactTypeConstraint<T> : ExactTypeConstraint
{
public new ConstraintExpression<T> With { get { ... } }
}
public class ConstraintExpression<T> : ConstraintExpression
{
public new ConstraintExpression<T> With { get { ... } }
public new ConstraintExpression<T> And { get { ... } }
// ...
public ResolvableConstraintExpression<T> Property<TR> (Expression<Func<T, TR>> propertyAccessor);
}
public class ResolvableConstraintExpression<T> : ResolvableConstraintExpression
{
public new ConstraintExpression<T> With { get { ... } }
public new ConstraintExpression<T> And { get { ... } }
}
That way, one could write constraints similar to: Is.TypeOf<object>().With.Property (o => o.P).EqualTo (42).And.Property (o => o.P2).EqualTo (43)
The NUnitLite integrated runner supports some of the same features as the full nunit-console but uses different options to access them. To the extent possible, where the features are common, we should use the same options in both programs.
Ideally, NUnitLite commandline option processing should be converted to use Mono.Options at the same time. However, it turns out that Mono.Options won't compile in the compact framework, making it unsuitable for NUnitLite.
We'll need to give more thought to a common approach to commandline interpretation across all nunit projects.
The result XML should also contain the start and end times of a test execution. This is useful to match additional log data from external sources to failing tests, especially when executing long-running tests like system or user acceptance tests or tests that are executed unattended.
Proposed solution:
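One possible shape for the result XML; the attribute names and timestamp format below are assumptions, not the actual proposal from the linked comments:

```xml
<!-- Hypothetical attributes: start-time / end-time in UTC -->
<test-case name="AcceptanceTests.LongRunningImport"
           result="Failure"
           start-time="2013-05-28 14:02:11Z"
           end-time="2013-05-28 14:47:36Z" />
```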
See comments at https://bugs.launchpad.net/nunit-3.0/+bug/1183722
Hi,
I have a web app with extensive automated testing. I have some installation tests (delete the DB tables and reinstall from scratch), upgrade tests (from older to newer schema), and then normal web tests (get this page, click this, etc.)
I switched from NUnit to MbUnit because it allowed me to specify test orders via dependency (depend on a test method or test fixture). I switched back to NUnit, and would still like this feature.
The current work-around (since I only use the NUnit GUI) is to order test names alphabetically, and run them fixture by fixture, with the installation/first ones in their own assembly.
Setting the Timeout attribute on a SetUpFixture has no effect.
We have a class which handles some preconditions for all tests, so a timeout might be a possible attribute there too.
We should have a build version against MonoDroid.
In V2, when there are multiple SetUpFixtures, all but one is ignored. They should all be run, even if the order of execution is indeterminate.
I would expect the following test to pass:
[Test]
public void DtoTest()
{
var a = DateTimeOffset.Parse("2012-01-01T12:00Z");
var b = DateTimeOffset.Parse("2012-01-01T12:01Z");
Assert.That(a, Is.EqualTo(b).Within(TimeSpan.FromMinutes(2)));
}
NUnit.framework version 2.6.2.12296
NUnit currently has special code for comparing two DateTimes within a tolerance, but not for DateTimeOffset.
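Until DateTimeOffset gets the same tolerance support, a workaround is to compare the difference directly:

```csharp
var a = DateTimeOffset.Parse("2012-01-01T12:00Z");
var b = DateTimeOffset.Parse("2012-01-01T12:01Z");

// Equivalent intent, without relying on Within() for DateTimeOffset:
// (a - b) yields a TimeSpan; Duration() takes its absolute value.
Assert.That((a - b).Duration(), Is.LessThanOrEqualTo(TimeSpan.FromMinutes(2)));
```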
It happened to us that somehow there was a zero character (NUL, '\0') in an assertion message. This character within the message, even embedded in a CDATA section within the result file, caused a crash in every application that tried to create a DOM; the .NET XmlDocument class and even the XmlSpy application crashed. Of course a NUL is not allowed in an XML file, but this character came from the assertion message and so went into the NUnit result file. NUnit should check whether messages are really valid strings and, if there is a disallowed character, escape it or convert it to its string representation.
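A sketch of the kind of sanitizing the framework could apply before writing the result file; XmlConvert.IsXmlChar requires .NET 4.0 or later:

```csharp
// Replaces characters that are illegal in XML 1.0 (such as NUL, '\0')
// with a visible escape like "\u0000".
static string MakeXmlSafe(string message)
{
    var sb = new StringBuilder(message.Length);
    foreach (char c in message)
    {
        if (XmlConvert.IsXmlChar(c))
            sb.Append(c);
        else
            sb.AppendFormat("\\u{0:X4}", (int)c);
    }
    return sb.ToString();
}
```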
Add a method to Assert or TestContext to allow the user to cause the run to be paused while external action is taken. This is a general facility and will make it easier to do a number of things, including:
See discussion at https://bugs.launchpad.net/nunit-3.0/+bug/730891
If a test throws one of our custom exception types (e.g., InterfaceBrokerException) and you're running tests via the NUnit GUI, then NUnit shows the following instead of the exception that was actually thrown:
An unhandled System.Runtime.Serialization.SerializationException was thrown while executing this test : Unable to find assembly 'Profitstar.Library, Version=2008.2.338.27793, Culture=neutral, PublicKeyToken=null'.
This is because the tests get run in one AppDomain, and then the results are marshaled to the main AppDomain via .NET serialization; but the Profitstar.Library assembly isn't loaded into the main AppDomain (nor should it be, because then it couldn't be unloaded), so it can't deserialize the exception.
So we end up with no stack trace, no original exception message, and no idea of even what exception type got thrown.
Is there any way to circumvent the use of a separate AppDomain when using the NUnit GUI? If not, can something be added to configure that feature?
See discussion at https://bugs.launchpad.net/nunit-3.0/+bug/494119
I have a hierarchy of data-driven test cases that's 3 levels deep.
Currently the TestFixture can only be instantiated a constant number of times. How can I parameterize the TestFixture based on source input, just like with the TestCaseSource attribute?
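A sketch of a TestCaseSource-style syntax at fixture level; the attribute name and the data-loading helper below are assumptions:

```csharp
// Hypothetical attribute, mirroring TestCaseSource for fixtures:
// each item from the source becomes one constructor argument set.
[TestFixtureSource("FixtureParameters")]
public class DeviceTests
{
    public DeviceTests(string deviceName) { /* ... */ }

    public static IEnumerable<string> FixtureParameters
    {
        get { return LoadDeviceNamesFromDatabase(); } // placeholder source
    }

    [Test]
    public void DeviceResponds() { /* ... */ }
}
```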
More info at https://bugs.launchpad.net/nunit-3.0/+bug/505700
Before we upgraded to the mentioned version, both Debug.WriteLine and Trace.WriteLine had been written to the output in version 2.5.9. In the new version this is disabled by default. We found the checkbox with which we can enable this in the GUI, but there is no similar option in the NUnit console. We need it because it is used in our test result logging system.
See discussion at https://bugs.launchpad.net/nunit-3.0/+bug/1096902
Having parameterized tests is a very nice feature. Some additional possibilities for the attributes are needed:
Range with DateTime and Step as TimeSpan: Range (DateTime, DateTime, TimeSpan)
Range (long, long) without Step (implicitly 1) - or does this conflict with Range (int, int)?
unsigned variants
Range (Type sourceType, string sourceName, int start, int end) might also be nice. Then it is easy to use the same data for multiple tests, if sourceName implements IList (like ValueSourceAttribute).
Range (string sourceName, int start, int end) see above
Random for long and float
unsigned variants
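Sketches of the proposed overloads in use. Since attribute arguments must be compile-time constants, dates and spans are shown as parseable strings; all syntax here is hypothetical:

```csharp
// Hypothetical: Range over dates with a TimeSpan step (one per day).
[Test]
public void NightlyBatch(
    [Range("2010-01-01", "2010-01-31", "1.00:00:00")] DateTime day) { /* ... */ }

// Hypothetical: Range(long, long) with an implicit step of 1.
[Test]
public void LargeOffsets([Range(0L, 5L)] long offset) { /* ... */ }

// Hypothetical: Range drawing values from a named source, like ValueSource.
[Test]
public void SharedData(
    [Range(typeof(Data), "Values", 0, 100)] int item) { /* ... */ }
```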
See discussion at https://bugs.launchpad.net/nunit-3.0/+bug/629496
Setting: NUnit 2.5.3, Console runner, .NET 3.5
TheoryAttribute does not work with generics:
[Datapoint]
public double[,] Array2X2 = new double[,] { { 1, 0 }, { 0, 1 } };
[Theory]
public void TestForArbitraryArray(T[,] array)
{
// ...
}
NUnit gives a warning saying "No arguments were provided".
See discussion at https://bugs.launchpad.net/nunit-3.0/+bug/537914
When you have a combinatorial attribute such as the following:
[Test, Combinatorial]
public void MyTest(
[Values(1,2,3)] int x,
[Values("A","B")] string s)
{
...
}
Sometimes there is an issue with a certain set of values and you need to debug it. Would be nice to just turn off the combinatorial feature (by not having the attribute!), so that this could work for debugging:
[Test, TestCase(3,"A")]
public void MyTest(
[Values(1,2,3)] int x,
[Values("A","B")] string s)
{
...
}
As it is now, you have to completely redo the signature (and put it back after fixing the bug/test) or set up a complex breakpoint:
[Test, TestCase(3,"A")]
public void MyTest(
int x,
string s)
{
...
}
See discussion at https://bugs.launchpad.net/nunit-3.0/+bug/1022810