
r2dbc-spi's Issues

Allow consuming arbitrary result segments (rows, out parameters, update count, messages)

I'm assuming that if the statement's returning stream is finished, the Publisher returned by Statement#execute() will simply stop producing results.

But if there is a result, in order to distinguish whether it is an update count or a result set, should we simply try calling Result#getRowsUpdated(), and check if the resulting Publisher is empty (which would mean that there must be a result set)?

Maybe it would be a bit nicer if there were a third method, Result#getResultType(), returning an enum that describes the result. Or, at least, specify the behaviour in the Javadoc.
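
To illustrate, a hypothetical sketch of such a discriminator (not part of the current SPI; the names are made up):

public interface Result {

    enum ResultType { UPDATE_COUNT, ROWS }

    // Hypothetical: lets consumers branch on the result kind without
    // probing getRowsUpdated() for emptiness.
    ResultType getResultType();
}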

Define an approach for closeable ConnectionFactories

Some ConnectionFactory implementations (such as a pool or a factory that requires shared resources outside of the Connection scope) might need to be cleaned up when the factory is no longer in use. We should define how such ConnectionFactory implementations should approach cleanup.

One idea would be implementing Closeable or AutoCloseable.
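
A reactive variant might fit better than the blocking Closeable contract; a hypothetical sketch (neither the type name nor the method is part of the SPI):

public interface CloseableConnectionFactory extends ConnectionFactory {

    // Hypothetical: releases shared resources (e.g. event-loop groups,
    // pooled connections) once the factory is no longer in use.
    Publisher<Void> close();
}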

Add EventProvider API

Right now there isn't a way to signal when a connection is closed. There needs to be a callback that is called when a connection is closed so that you can perform cleanup actions in things like a connection pool.
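
A minimal sketch of what such a callback could look like (hypothetical; no such API exists in the SPI today):

public interface Connection {

    // Hypothetical: registers a callback invoked once the connection is
    // closed, e.g. so a pool can evict its entry.
    void onClose(Runnable listener);
}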

Observability/Tracing hooks?

I love this new initiative. It shows a lot of promise. One area I see that can be improved is the observability/traceability front. This includes both API-based and agent-based approaches.

JDBC's API makes some decisions that complicate these goals. For example, the PreparedStatement interface provides no reference to the original SQL statement. We must maintain a weak map keyed by the prepared statement to recover that SQL statement.

I'm sure there are other aspects that can be improved here. This is just intended to start that discussion. @adriancole would be a great point of contact within Pivotal to discuss with as well.

Add a way to get a read only connection

public interface ConnectionFactory {
    Publisher<Connection> create();
    ConnectionFactoryMetadata getMetadata();
}

Setting mutability on an existing connection is way too late, IMO, to get a read-only connection. This brings up another point: how to provide multiple connections. PostgreSQL JDBC has a notion of providing multiple hosts/ports in the URL, but no notion of which ones are read-only. This might be better handled with a custom connection factory, but I still think setting mutability on the transaction is too late.
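
One way to address this at the factory level, rather than on an existing connection, would be an operation that requests mutability up front. A hypothetical sketch (createReadOnly() is not part of the SPI):

public interface ReadOnlyCapableConnectionFactory extends ConnectionFactory {

    // Hypothetical: hands out a connection that is read-only from the start,
    // allowing a driver to route it to a read replica.
    Publisher<Connection> createReadOnly();
}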

BLOB/CLOB API

We should provide an API to interact with large chunks of binary and character data. We also should outline how drivers are supposed to handle large chunks of data. In a pull-oriented API, it is easy to skip BLOB/CLOB columns (read: not consume them) if a row is consumed partially and skipped without reading such columns.
In an event-driven model, data is pushed into drivers without knowing whether these types of columns are going to be processed.

A first sketch for consuming BLOB/CLOB data could incorporate Publisher<ByteBuffer> and Publisher<CharBuffer>. On second thought, buffers are subject to pooling, so Publisher<T> and BiConsumer<T, ByteBuffer> could make more sense, as the consumer function would copy the content to the buffer and release T (assuming T is pooled).
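
For reference, a minimal sketch of what streaming large-object types could look like (names assumed; the consumption model above is still an open question):

public interface Blob {
    Publisher<ByteBuffer> stream();
    Publisher<Void> discard(); // release backing resources without consuming
}

public interface Clob {
    Publisher<CharSequence> stream();
    Publisher<Void> discard();
}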

Improve RowMetadata for presence/absence check of columns

RowMetadata currently exposes getColumnMetadata(…) and getColumnMetadatas(). If a client wants to check if a column is part of the result, then it has to obtain getColumnMetadatas() and iterate over ColumnMetadata.

For a discovery-based client that does not know the structure of a result, this mapping causes significant overhead, as the client cannot cache a previous metadata object. getColumnMetadata(…), on the other hand, throws an exception if the column at the given identifier (e.g. the name of the column) is absent.

How about introducing SortedSet<String> getColumnNames() as a short-cut? This would solve at least three issues:

  1. Discovery of the number of columns in the row.
  2. Fast retrieval of column names without additional mapping.
  3. Clients can easily call getColumnNames().contains(…) to check whether a column exists.

A nice side effect is that drivers can implement naming rules themselves (e.g. via TreeSet and a Collation) and drivers can cache the result metadata based on vendor-specific rules.
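
A brief usage sketch of the proposed accessor (the column name is illustrative):

SortedSet<String> columnNames = rowMetadata.getColumnNames();
if (columnNames.contains("last_modified")) {
    Object value = row.get("last_modified");
}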

See also spring-projects/spring-data-r2dbc#69 for a use-case.

Introduce R2DBC Exception Hierarchy

R2DBC should provide an exception hierarchy to simplify exception translation in clients and frameworks. This ticket is a follow-up to our weekly call.

New Exceptions to introduce

Transient Exception

  • Transient Resource Exception
  • Rollback Exception (automatic rollback by the database because of a deadlock or other transaction serialization failures)
  • Timeout Exception (query timeout, login timeout)

Non-Transient

  • Non-Transient Resource Exception (e.g., protocol decoding errors, unsupported charset/decoding in the client. Connection should be closed as it might face a non-recoverable error)
  • Data Integrity Violation Exception (various data errors, such as data conversion failures, division by zero, and invalid arguments to functions)
  • Permission Denied Exception (Login failure, access denied to objects)
  • Bad Grammar Exception (statement violates SQL syntax)
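
A hedged sketch of how the hierarchy could be rooted in an unchecked base exception (class names are assumptions, not final):

public abstract class R2dbcException extends RuntimeException { }

public abstract class R2dbcTransientException extends R2dbcException { }
public class R2dbcRollbackException extends R2dbcTransientException { }
public class R2dbcTimeoutException extends R2dbcTransientException { }

public abstract class R2dbcNonTransientException extends R2dbcException { }
public class R2dbcNonTransientResourceException extends R2dbcNonTransientException { }
public class R2dbcDataIntegrityViolationException extends R2dbcNonTransientException { }
public class R2dbcPermissionDeniedException extends R2dbcNonTransientException { }
public class R2dbcBadGrammarException extends R2dbcNonTransientException { }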

Usage in drivers

Drivers should subclass these exceptions to improve traceability of failures and to indicate their origin, primarily when an application uses multiple drivers.

Document exception usage

General guidelines

  • Prefer unchecked exceptions.
  • Throw exceptions as early as possible.
  • Exceptions can occur as part of a method invocation or be transported as an error signal.

General Exceptions

  • IllegalArgumentException (null arguments, arguments not valid for the current method call)
  • IllegalStateException (generic, illegal state)
  • UnsupportedOperationException (operation not available/not implemented/not supported)
  • IOException (low-level I/O failures)

Steps to fix

  • Claim this issue with a comment below and ask any clarifying questions you need.
  • Set up a repository locally following the Contributing Guidelines.
  • Try to fix the issue by creating code and documentation according to the description above.
  • Commit your changes and start a pull request.

Deliverables

  • Create exception hierarchies for Transient and Non-Transient exceptions.
  • Document newly introduced exceptions in asciidoctor spec.
  • Document how to use General Exceptions.

Spec documentation: Add detailed specification for Result and Row

We already have some documentation around Column and Row Metadata. We should explain how Result and Row behave when a Result is emitted and when execution happens.

Connection occupancy also plays into this, as a single Connection can handle only a single conversation at a time.

In this context, we should specify which exceptions/errors are emitted before Result emission and which errors are emitted during Result consumption.

Positional Argument Binding Index

Hi,

One thing I find a bit awkward is the relationship between the arguments in the SQL statement and the bind arguments. The arguments in a SQL statement are indexed from 1, while the arguments that are bound are indexed from 0, e.g.:

.createStatement("insert into foo values ($1, $2, $3, $4)")
    .bind(0, v1)
    .bind(1, v2)
    .bind(2, v3)
    .bind(3, v4)
    .execute()

I think it would be nice if the bindings started from the same place, so that both start at 0 or both start at 1.

Default implementation of returnGeneratedValues()

As identified in #42, returnGeneratedValues() is optional for databases that don't support value generation. By convention, optional APIs are implemented with default implementations that are overridden where appropriate.

returnGeneratedValues() should have a default implementation that is a no-op.
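
A minimal sketch of such a no-op default, assuming returnGeneratedValues() returns Statement for fluent chaining:

public interface Statement {

    default Statement returnGeneratedValues(String... columns) {
        return this; // no-op for databases without value generation
    }
}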

Explain transactional behavior in the context of reactive programming

In our weekly call we discussed net-effects of ACID properties when using reactive infrastructure components.

With reactive runtimes, we move congestion out of the JVM and leave concurrency to the database. We should explain the effects of isolation and locking in consideration of MVCC, and illustrate how this affects applications that use transaction management to, for example, group multiple statements together, or that wait for other transactions to complete.

This is an investigation and documentation task.

Reinstate executeReturningGeneratedKeys()

Per the discussion in #7, let's reinstate this as part of the API. I believe that we should also do the originally proposed change to add column names.

Publisher<? extends Result> executeReturningGeneratedKeys(String... columns);

Define consistent results for INSERT/UPDATE statements

Currently, Spring Data R2DBC has been built using r2dbc-postgresql and thus on this assumption...

	@Test
	public void insertTypedObject() {

		LegoSet legoSet = new LegoSet();
		legoSet.setId(42055);
		legoSet.setName("SCHAUFELRADBAGGER");
		legoSet.setManual(12);

		DatabaseClient databaseClient = DatabaseClient.create(connectionFactory);

		databaseClient.insert().into(LegoSet.class)
				.using(legoSet).exchange()
				.flatMapMany(it -> it.extract((r, m) -> r.get("id", Integer.class)).all())
				.as(StepVerifier::create)
				.expectNext(42055).verifyComplete();

		assertThat(jdbc.queryForMap("SELECT id, name, manual FROM legoset")).containsEntry("id", 42055);
	}

The extract() operation being applied to the SqlResult presumes the inserted row is being returned. Perhaps because r2dbc-postgresql does that?

Testing out the same code using r2dbc-h2, this test case fails, because the ONLY thing returned (currently) are the rows updated. I had to alter the test circumstance as follows to make the test case pass:

		databaseClient.insert().into(LegoSet.class)
				.using(legoSet).exchange()
				.flatMapMany(mapSqlResult -> mapSqlResult.rowsUpdated())
				.as(StepVerifier::create)
				.expectNext(1)
				.verifyComplete();

Is this:

A) Perfectly acceptable given variations between platforms?

or

B) Something that needs to be standardized and captured in the SPI?

Based on @mp911de 's comments on #17, perhaps we don't want to rein in any particular RDBMS on something like this?

Add an io.r2dbc.spi.Savepoint type to allow for unnamed savepoints

As a frequent user of JDBC, I find JDBC's ability to create unnamed savepoints very convenient. This seems to be lacking in the current R2DBC SPI. I would expect the following Connection API (Javadoc and other methods removed for simplicity):

public interface Connection {
    Publisher<Savepoint> createSavepoint();
    Publisher<Savepoint> createSavepoint(String name);
    Publisher<Void> releaseSavepoint(Savepoint savepoint);
    Publisher<Void> rollbackTransaction();
    Publisher<Void> rollbackTransactionToSavepoint(Savepoint savepoint);
}
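
The Savepoint type itself could be a simple handle; a hypothetical sketch:

public interface Savepoint {

    // For unnamed savepoints, a driver could generate an internal name.
    String getName();
}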

See also this discussion:
https://groups.google.com/forum/#!topic/r2dbc/QZpTpQtj1HA

R2DBC artifacts should include license and notice files

R2DBC artifacts should include NOTICE and LICENSE files in their binary and source archives to comply with Apache license requirements. Currently, none of these files are included (same for driver implementations).

Screenshot of the SPI JAR structure: (image omitted)

Introduce Connection.isValid() method

We should provide a method to test connection validity (connection is open and functional) on the Connection level. This is useful to test connection liveness without the need to specify a validation query that might be vendor-specific SQL. Drivers might have a better option to test connection validity than using pure SQL.

The method would look like:

Publisher<Void> isValid()

We could also include a notion of validation depth (is connected, is the server working) to run validation on two different levels.

This would result in a method signature like:

Publisher<Void> isValid(ValidationDepth) where ValidationDepth is an enum.
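
A sketch of the enum (constant names are assumptions):

public enum ValidationDepth {
    LOCAL,  // client-side check only: the connection object is open
    REMOTE  // issues a round trip to verify the server responds
}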

executeReturningGeneratedKeys() should allow keys to be specified

Currently, the Statement#executeReturnGeneratedKeys() method forces implementations to return all generated keys regardless of what a user would like. However, as pointed out by @davecramer, this could potentially be the entire row in PostgreSQL. Both PostgreSQL and H2 have support for specifying a subset of these keys using column names, and this seems like a reasonable requirement to include.

I propose changing Statement#executeReturnGeneratedKeys() to

Publisher<? extends Result> executeReturningGeneratedKeys(String... columns);

Given a collection of column names, only the keys from those columns should be returned. If the collection is empty, all possible keys should be returned.

Introduce R2DBC Connection URLs

R2DBC should allow URL-based configuration, as this mechanism has proved useful for a lot of data access technologies (JDBC, MongoDB, Redis, …).
URL-based configuration makes it very convenient to configure a driver, as a URL already carries significant details about the connection endpoint, the protocol, and potential additional options.

R2DBC SPI should provide a basic URL specification and a parsing implementation to parse R2DBC URLs into ConnectionFactoryOptions.

API to be added

  • Add static parse(String) and parse(URI) methods to ConnectionFactoryOptions, returning ConnectionFactoryOptions. These methods implement parsing.
  • Add a static get(String) method to ConnectionFactories, returning ConnectionFactory, as a shortcut for drivers. This method invokes parsing and get(ConnectionFactoryOptions).

We should also document the URL configuration behavior within the spec documentation.
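
Usage of the proposed shortcut could then look like this (the URL format and driver name are illustrative):

ConnectionFactory connectionFactory = ConnectionFactories.get("r2dbc:postgresql://localhost:5432/mydb");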


Improved ColumnMetadata SPI

As discussed also on the mailing list, I have some suggestions of things I'd like to see in the io.r2dbc.spi.ColumnMetadata type. Missing properties are:

Mandatory:

  • scale
  • nullability

Optional:

  • length. JDBC encodes this in the precision, which isn't very clean. The SQL standard INFORMATION_SCHEMA dictionary views distinguish between precision and length
  • some way to access the qualified name. Currently, only the name / "label" is accessible, which is the mandatory information. But sometimes, having access to qualified names is useful as well, if such qualification is applicable. In that case, we'd need:
    • catalog name
    • schema name
    • table name

Other criticism:

  • type. This returns an Integer, which raises several questions
    • What is the value of this Integer? The same as java.sql.Types? In that case, that should be documented
    • Why not int? What would be the meaning of a null value here?
    • Why not Class<?>, as that is used by Row.get(Object, Class<T>), for example. The two methods are closely related. Having to manually translate from an int/Integer to the Class seems unnecessary.
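
Pulling the suggestions together, an improved ColumnMetadata might look like this (a sketch; names and the Nullability enum are assumptions):

public enum Nullability { NULLABLE, NON_NULL, UNKNOWN }

public interface ColumnMetadata {
    String getName();
    Class<?> getJavaType();       // instead of an Integer type constant
    Integer getPrecision();       // distinct from length where applicable
    Integer getScale();           // mandatory per the list above
    Nullability getNullability(); // mandatory per the list above
}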

Add documentation for R2DBC

We should provide some documentation around R2DBC so we end up with a specification that covers the SPI, behavior, and constraints of R2DBC.

How to deal with Publishers returning Void?

Hi,

I understand that r2dbc mainly deals with and is based on Project Reactor. That is fine.

However, as the API exposes only reactive streams, it's very easy to use it with RxJava2, like in the following example:

package io.r2dbc.h2;

import io.r2dbc.h2.util.H2DatabaseExtension;
import io.reactivex.Single;
import reactor.core.publisher.Mono;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;

final class ExamplesWithRxJava2 {

	private static final String DATABASE_NAME = "mem:r2dbc-examples";
	private static final String JDBC_CONNECTION_URL = "jdbc:h2:" + DATABASE_NAME;

	@RegisterExtension
	static final H2DatabaseExtension SERVER = new H2DatabaseExtension(JDBC_CONNECTION_URL);

	private final H2ConnectionFactory connectionFactory = new H2ConnectionFactory(
		Mono.defer(() -> Mono.just(DATABASE_NAME)));

	@BeforeEach
	void createTable() {
		SERVER.getJdbcOperations().execute("CREATE TABLE test ( value INTEGER )");
	}

	@AfterEach
	void dropTable() {
		SERVER.getJdbcOperations().execute("DROP TABLE test");
	}

	@Test
	void close() {
		Single.fromPublisher(this.connectionFactory.create())
			.flatMap(connection -> Single.fromPublisher(connection.close().then(Mono.empty()))).subscribe();
	}
}

Which I created based on @gregturn's work in r2dbc-h2. I just added

<dependency>
    <groupId>io.reactivex.rxjava2</groupId>
    <artifactId>rxjava</artifactId>
    <version>2.2.2</version>
    <scope>test</scope>
</dependency>

This simple test will fail with a java.util.NoSuchElementException: The source Publisher is empty. That happens with or without the trailing empty (non-emitting) Mono (it's redundant anyway, as the H2 implementation emits an empty Mono as well).

The reason behind this is how RxJava2 deals with Nulls, see:
https://github.com/ReactiveX/RxJava/wiki/What%27s-different-in-2.0#nulls
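
For what it's worth, RxJava 2 ships Completable precisely for valueless sources, so one workaround is to consume the close Publisher as a completion signal rather than as a Single:

Completable.fromPublisher(this.connectionFactory.create()
        .subscribe(connection -> Completable.fromPublisher(connection.close()).subscribe()));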

I have been looking around for a solution myself for my reactive Neo4j client proposal and ended up with something like a "Void signal", see https://github.com/michael-simons/neo4j-reactive-java-client/blob/master/client/src/main/java/org/neo4j/reactiveclient/DefaultNeo4jClientImpl.java#L65

So: maybe this is an issue for you, maybe not. I'd be happy about your feedback nevertheless, i.e. what's your take on this to ensure compatibility with other implementations of Reactive Streams?

Thanks.

Specify transaction/auto-commit behavior

We should specify in which state a pristine connection is created and how to interact with auto-commit mode. This should also be reflected in the spec documentation, along with SPI methods to set and query the auto-commit state.

Introduce possibility to retrieve generated keys with statement execution

Along with #17, we decided to remove executeReturningGeneratedKeys(), as we had seen key generation only from the Postgres and SQL Server perspectives, in which returning generated keys is just a matter of SQL.

We've learned from a discussion on the mailing list that this assumption isn't necessarily true for other databases:

  • H2 internally sets a flag to retrieve generated keys
  • Informix needs to parse protocol frames to report generated keys

Pure SQL approaches:

  • Postgres: Append RETURNING *
  • SQL Server: Append SELECT SCOPE_IDENTITY()
  • DB2: Wrap statement with SELECT id FROM NEW TABLE (…)

We should investigate further and re-introduce a possibility to obtain generated keys.

R2DBC API: Provide Row.get(Object identifier)

From @mp911de on June 7, 2018 15:24

When reading and mapping data into objects, it's common to have types which differ between the model and its persistent representation. We should have a method to read values and let the driver determine the data type.

Note: Ideally, we have Row.get(String identifier) and Row.get(int index) methods to tighten up the API contract and resolve ambiguities over the identifier.

Copied from original issue: r2dbc/r2dbc-client#7

Null values should throw IllegalArgumentException

Currently the spec mandates that all unanticipated null values throw NullPointerException. This makes the use of Objects.requireNonNull() easy. The problem is that it isn't the right exception to throw: a null argument is just one kind of illegal argument. Instead, the spec should mandate throwing IllegalArgumentException, a more sensible exception.

Review usage of recursive generics in Statement and Batch

While upgrading from M5 to M6, I've noticed the effects of this change:
983c278

Both Statement and Batch have now been re-designed to use a recursive generic type variable:

public interface Statement<SELF extends Statement<SELF>> { ... }
public interface Batch<SELF extends Batch<SELF>> { ... }

The obvious convenience is for all final subtypes of these types to be able to covariantly return themselves to create "improved" fluent APIs. The relatively low benefit is countered by a very high price: the types themselves become virtually unusable in client code, as the recursive type bound can never be captured with wildcards, only with generic method type variables. This makes client code quite ugly and difficult to work with.
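
To illustrate: client code that wants to operate on a statement generically has to introduce a method-level type variable, because Statement<?> cannot capture SELF (assuming bind(int, Object) returns SELF):

static <S extends Statement<S>> S bindAll(S statement, Object... values) {
    for (int i = 0; i < values.length; i++) {
        statement = statement.bind(i, values[i]);
    }
    return statement;
}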

I suggest rolling back this change, which wasn't strictly necessary for #3 and #6. Recursive generics are more trouble than benefit, and useful only in internal type hierarchies, when the top level type isn't really used by client code. Even then, I'd be very careful using them.

Object vs. int and String identifier in bind(Object, …) and get(Object) methods

Part of the R2DBC tour feedback was: why do we declare Object identifiers in Row.get(…) and Statement.bind(…) methods instead of Row.get(String, …) and Row.get(int, …)?

Columns are either accessed by name (ambiguities are possible as result sets can contain duplicate column names) or by index. Do we want to refine our methods to accept either column names (parameter placeholder names in case of Statement.bind(…)) or indexes instead of Object?
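
Refined overloads could look like this (a sketch of the suggested tightening):

public interface Row {
    <T> T get(int index, Class<T> type);
    <T> T get(String name, Class<T> type);
}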

Originally reported by: @struberg

Add Row.get methods that accept a ParameterizedTypeReference<T>

Row.get currently accepts a Class to determine the type of the item being returned. Consider adding overloads to accept a ParameterizedTypeReference so that information about parameterized types is retained at runtime. This would make it possible to define codecs that differentiate between List<String> and List<Integer>, for example.
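
A hypothetical overload, with ParameterizedTypeReference modeled after Spring's type of the same name:

public interface Row {

    // Hypothetical: retains full generic type information at the call site.
    <T> T get(Object identifier, ParameterizedTypeReference<T> typeReference);
}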

Remove textual values from IsolationLevel enum

Since the text of the enum https://github.com/r2dbc/r2dbc-spi/blob/master/r2dbc-spi/src/main/java/io/r2dbc/spi/IsolationLevel.java is principally for writing SQL calls (at least in Postgres), and not all data stores use this query to look up the isolation level, we should remove it from the SPI and let the data store define it.

Case in point: H2's command to lookup/set Isolation Level is completely different.

Would it be better to have an SPI-defined function that performs the lookup and returns an IsolationLevel enum? That would be quite handy and better fit the fact that many data stores will be doing this anyway.
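
Such a function might live on Connection; a hypothetical sketch (neither the method nor its reactive return type is settled):

public interface Connection {

    // Hypothetical: the driver performs the vendor-specific lookup and maps
    // the outcome onto the IsolationLevel enum.
    Publisher<IsolationLevel> getTransactionIsolationLevel();
}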

ConnectionFactory discovery

We should provide means to discover connection factories so that clients can easily take a connection specification and then look up and configure the appropriate connection factory.

Using JDBC provides some convenience in terms of providing a connection URL, and the infrastructure components figure out which driver to use. We do not want to include URL parsing in R2DBC drivers, yet we want to retain convenience from a user's perspective: to potentially reuse a JDBC URL, a client component should be able to determine the driver and configure the connection factory.
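
Discovery could build on java.util.ServiceLoader, much like JDBC's DriverManager does; a minimal sketch assuming a ConnectionFactoryProvider SPI interface with supports() and create() methods:

ServiceLoader<ConnectionFactoryProvider> providers = ServiceLoader.load(ConnectionFactoryProvider.class);
for (ConnectionFactoryProvider provider : providers) {
    if (provider.supports(options)) {
        return provider.create(options);
    }
}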

Add configurable fetch size on Statement

Statements that are executed using a cursor require a configurable fetch size to optimize fetching behavior for applications. The fetch size can also be used as an indicator whether a Statement should be executed by using cursors or whether an application (leaving fetch size unconfigured) wants to use direct execution (if supported by vendors).

Deriving the fetch size from the demand value is risky because operators (e.g. handle(), filter()) may drop elements and request(1). In the worst case, request(1) translates directly into cursor round trips fetching a single row.
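
On the SPI, this could be a single fluent method (a sketch; the name and default semantics are assumptions):

public interface Statement {

    // A value of 0 (the default) could indicate direct execution where supported.
    Statement fetchSize(int rows);
}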

Identifier Ambiguity

Currently, the bits of the API that take an identifier always accept an Object. The original design choice behind this is that drivers were free to support anything they wanted to. It could be a parameter name, an Integer index, or anything else the driver wanted to support. It has been raised that this can lead to ambiguity, and perhaps the API should be expanded to include a separate int-based equivalent for all identifiers.

Consider "unwrap()/isWrapperFor()" method

In JDBC, the Wrapper interface is implemented (extended) by other interfaces in order to allow users to retrieve the specific implementation.

Since the SPI is defined as interfaces, users might need to work with the actual implementation instances. This is especially true when an instance is wrapped by delegation or a proxy.
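
For reference, JDBC's java.sql.Wrapper contract, which an R2DBC equivalent could mirror (minus the checked exception):

public interface Wrapper {
    <T> T unwrap(Class<T> iface);
    boolean isWrapperFor(Class<?> iface);
}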

Add diagnostic information to ConnectionFactoryProvider and ConnectionFactories

Hi, this is a first-timers-only issue. This means we've worked to make it more legible to folks who either haven't contributed to our codebase before or even folks who haven't contributed to open source before.

If that's you, we're interested in helping you take the first step and can answer questions and help you out as you do. Note that we're especially interested in contributions from people from groups underrepresented in free and open source software!

If you have contributed before, consider leaving this one for someone new, and looking through our general ideal-for-contribution issues. Thanks!

Background

R2DBC's ConnectionFactories is the gateway to obtain a ConnectionFactory from a ConnectionFactoryOptions object. ConnectionFactoryOptions is a configuration object that holds the connection configuration in order to obtain a ConnectionFactory from the appropriate driver.

Problem

R2DBC uses ConnectionFactoryProvider through Java's Service Provider (ServiceLoader) mechanism to look up a ConnectionFactoryProvider that supports the requested configuration (ConnectionFactoryOptions). If none of the available ConnectionFactoryProviders supports the configuration, it is difficult to debug what happened and why. A typical example is a spelling mistake in the driver name or not knowing exactly which drivers are on the classpath.

Solution

io.r2dbc.spi.ConnectionFactoryProvider should expose a String getDriver() method that returns the driver identifier used by a particular driver. That is the value one would use with ConnectionFactoryOptions.DRIVER.

ConnectionFactories should collect the available drivers during configuration. If no driver supports the given ConnectionFactoryOptions through ConnectionFactories.get(…), then the exception message should mention the available drivers, or mention that no driver was available at all.
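
The changed provider interface could look like this (supports() and create() shown as the assumed existing methods):

public interface ConnectionFactoryProvider {

    ConnectionFactory create(ConnectionFactoryOptions connectionFactoryOptions);

    boolean supports(ConnectionFactoryOptions connectionFactoryOptions);

    // Proposed: returns the driver identifier, i.e. the value one would use
    // with ConnectionFactoryOptions.DRIVER.
    String getDriver();
}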

Steps to fix

  • Claim this issue with a comment below and ask any clarifying questions you need.
  • Set up a repository locally following the Contributing Guidelines.
  • Try to fix the issue following the steps above.
  • Commit your changes and start a pull request.

Deliverables

  • Changed ConnectionFactoryProvider interface.
  • Changed ConnectionFactories interface.

Note: R2DBC SPI has no tests and no test infrastructure so tests are currently not implemented.
