
cachified's People

Contributors

adirishi, davidhoga, dependabot[bot], dnlhc, kentcdodds, mannyv123, michaeldeboey, richardscarrott, tapaibalazs, tearingitup786, tomanagle, xiphe


cachified's Issues

Redis Cache support

Hello mate, thanks for this awesome package!

I'm reviewing the @kentcdodds implementation here, and I was wondering whether I can use your package with a Redis cache as well. If so, how do you set the ttl on the Redis client?

Thanks for your help!
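For context, cachified's ttl is in milliseconds, and a Redis adapter typically translates it into an absolute expiry via the SET command's EXAT option (a unix timestamp in whole seconds). A minimal sketch of that translation, with illustrative names (this is not the adapter's actual source):

```typescript
// Sketch of deriving a Redis-style absolute expiry from cachified metadata.
// ttl and createdTime are in milliseconds, as cachified uses them;
// Redis' EXAT option wants a unix timestamp in whole seconds.
type CacheMetadata = { ttl?: number | null; createdTime: number };

function toExatSeconds({ ttl, createdTime }: CacheMetadata): number | undefined {
  // No finite ttl means the key should not expire at all.
  if (ttl == null || !Number.isFinite(ttl)) return undefined;
  return Math.ceil((createdTime + ttl) / 1000);
}
```

With a node-redis v4 client this would feed into something like `client.set(key, JSON.stringify(entry), { EXAT: toExatSeconds(metadata) })`; treat the exact option spelling as an assumption of your client version.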

move into epicweb-dev?

Hi @kentcdodds,

In the long term, I'm looking to step down from maintaining open-source projects for various personal reasons.

So I wanted to ask what you think about moving this into the @epicweb-dev org?

For now I'd happily continue maintaining this in either space, but I'd appreciate it if there were some more potential maintainers.

TTL must be slightly longer than the time it takes for the promise to resolve.

In my use case I was trying to set ttl: 0 and staleWhileRevalidate: Infinity so that every request after the initial one would trigger a refetch in the background. However, in my tests, if the ttl value is less than the time the promise takes to resolve, it ALWAYS refetches.

This is my example that always refetches.

  return await cachified({
    key: "test",
    cache: cache,
    getFreshValue: () =>
      new Promise((resolve) => {
        setTimeout(() => {
          console.log("resolving");
          resolve(new Date().toLocaleString());
        }, 5000);
      }),
    ttl: 3000,
    staleWhileRevalidate: Infinity,
  });

And if I change the promise timeout to 2000, it will always return the cached value and refetch in the background (after the ttl).

  return await cachified({
    key: "test",
    cache: cache,
    getFreshValue: () =>
      new Promise((resolve) => {
        setTimeout(() => {
          console.log("resolving");
          resolve(new Date().toLocaleString());
        }, 2000);
      }),
    ttl: 3000,
    staleWhileRevalidate: Infinity,
  });

And because of this I can't set the ttl to 0 to get my desired outcome.
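Assuming the entry's createdTime is stamped when getFreshValue is kicked off (an assumption about cachified's internals, based on the behavior described above), the arithmetic alone reproduces the observation:

```typescript
// Illustration of the staleness check only, not cachified's actual source.
// All times are in milliseconds.
function isExpired(createdTime: number, ttl: number, now: number): boolean {
  return now > createdTime + ttl;
}

const createdTime = 0;
// getFreshValue takes 5000ms with ttl: 3000 -> already expired when it lands.
const expiredOnArrival = isExpired(createdTime, 3000, createdTime + 5000);
// getFreshValue takes 2000ms with ttl: 3000 -> still fresh for another second.
const freshOnArrival = !isExpired(createdTime, 3000, createdTime + 2000);
```

This would also suggest that with ttl: 0 the value is stale the instant it is written, so only the staleWhileRevalidate window keeps it servable.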

Let me know if you need any additional info. Thanks for this library; it's exactly what I've been looking for!

Make Cache type not use Value

Here's a copy/paste of some of my code:

import * as YAML from 'yaml'
import {markdownToHtmlUnwrapped} from './markdown.server'
import {cachified} from 'cachified'
import {downloadDirList, downloadFile} from './github.server'
import {typedBoolean} from './misc'
import type {Workshop} from '~/types'
import {cache, shouldForceFresh} from './cache.server'

type RawWorkshop = {
  title?: string
  description?: string
  meta?: Record<string, unknown>
  events?: Array<Omit<Workshop['events'][number], 'type'>>
  convertKitTag?: string
  categories?: Array<string>
  problemStatements?: Workshop['problemStatementHTMLs']
  keyTakeaways?: Workshop['keyTakeawayHTMLs']
  topics?: Array<string>
  prerequisite?: string
}

async function getWorkshops({
  request,
  forceFresh,
}: {
  request?: Request
  forceFresh?: boolean
}) {
  const key = 'content:workshops'
  return cachified({
    cache,
    key,
    ttl: 1000 * 60 * 60 * 24 * 7,
    forceFresh: forceFresh ?? (await shouldForceFresh({request, key})),
    getFreshValue: async () => {
      const dirList = await downloadDirList(`content/workshops`)
      const workshopFileList = dirList
        .filter(
          listing => listing.type === 'file' && listing.name.endsWith('.yml'),
        )
        .map(listing => listing.name.replace(/\.yml$/, ''))
      const workshops = await Promise.all(
        workshopFileList.map(slug => getWorkshop(slug)),
      )
      return workshops.filter(typedBoolean)
    },
    checkValue: (value: unknown) => Array.isArray(value),
  })
}

async function getWorkshop(slug: string): Promise<null | Workshop> {
  const {default: pProps} = await import('p-props')

  const rawWorkshopString = await downloadFile(
    `content/workshops/${slug}.yml`,
  ).catch(() => null)
  if (!rawWorkshopString) return null
  let rawWorkshop
  try {
    rawWorkshop = YAML.parse(rawWorkshopString) as RawWorkshop
  } catch (error: unknown) {
    console.error(`Error parsing YAML`, error, rawWorkshopString)
    return null
  }
  if (!rawWorkshop.title) {
    console.error('Workshop has no title', rawWorkshop)
    return null
  }
  const {
    title,
    convertKitTag,
    description = 'This workshop is... indescribeable',
    categories = [],
    events = [],
    topics,
    meta = {},
  } = rawWorkshop

  if (!convertKitTag) {
    throw new Error('All workshops must have a convertKitTag')
  }

  const [
    problemStatementHTMLs,
    keyTakeawayHTMLs,
    topicHTMLs,
    prerequisiteHTML,
  ] = await Promise.all([
    rawWorkshop.problemStatements
      ? pProps({
          part1: markdownToHtmlUnwrapped(rawWorkshop.problemStatements.part1),
          part2: markdownToHtmlUnwrapped(rawWorkshop.problemStatements.part2),
          part3: markdownToHtmlUnwrapped(rawWorkshop.problemStatements.part3),
          part4: markdownToHtmlUnwrapped(rawWorkshop.problemStatements.part4),
        })
      : {part1: '', part2: '', part3: '', part4: ''},
    Promise.all(
      rawWorkshop.keyTakeaways?.map(keyTakeaway =>
        pProps({
          title: markdownToHtmlUnwrapped(keyTakeaway.title),
          description: markdownToHtmlUnwrapped(keyTakeaway.description),
        }),
      ) ?? [],
    ),
    Promise.all(topics?.map(r => markdownToHtmlUnwrapped(r)) ?? []),
    rawWorkshop.prerequisite
      ? markdownToHtmlUnwrapped(rawWorkshop.prerequisite)
      : '',
  ])

  return {
    slug,
    title,
    events: events.map(e => ({type: 'manual', ...e})),
    meta,
    description,
    convertKitTag,
    categories,
    problemStatementHTMLs,
    keyTakeawayHTMLs,
    topicHTMLs,
    prerequisiteHTML,
  }
}

export {getWorkshops}

With the current implementation of cachified's types, getWorkshops returns a Promise<unknown> because my cache is implemented like so:

export const cache: Cache<unknown> = {
  name: 'SQLite cache',
  async get(key) {
    const result = await prisma.cache.findUnique({
      where: {key},
      select: {metadata: true, value: true},
    })
    if (!result) return null
    return {
      metadata: result.metadata,
      value: JSON.parse(result.value),
    }
  },
  async set(key, {value, metadata}) {
    await prisma.cache.upsert({
      where: {key},
      create: {
        key,
        value: JSON.stringify(value),
        metadata: {create: metadata},
      },
      update: {
        key,
        value: JSON.stringify(value),
        metadata: {
          upsert: {
            update: metadata,
            create: metadata,
          },
        },
      },
    })
  },
  async delete(key) {
    await prisma.cache.delete({where: {key}})
  },
}

The Cache<unknown> is required because I want to use this same cache for many different types, so I don't know (or care) what the type is. I think cachified was built assuming each cache would be dedicated to one type of thing you want to cache, but my original implementation was a generic function that could use the same cache to store any number of types. So I think what I'm trying to do should be supported.

I can fix this by changing one thing here:

- cache: Cache<Value>;
+ cache: Cache<unknown>;

If that's the direction we go, it would probably be even better to make Cache not generic at all. I can't think of a situation where the cache needs to know or care about the type that's being cached.
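For illustration, here is a sketch of what a non-generic Cache could look like, together with an in-memory implementation shared across value types (the names mirror cachified's, but this is not its actual source):

```typescript
type CacheEntry = {
  value: unknown;
  metadata: { createdTime: number; ttl?: number | null };
};

// The store never needs to know the cached value's type; validation
// happens at the call site (e.g. via checkValue), not in the cache.
interface Cache {
  name?: string;
  get(key: string): CacheEntry | null | Promise<CacheEntry | null>;
  set(key: string, entry: CacheEntry): unknown;
  delete(key: string): unknown;
}

const store = new Map<string, CacheEntry>();
const memoryCache: Cache = {
  name: 'in-memory',
  get: key => store.get(key) ?? null,
  set: (key, entry) => store.set(key, entry),
  delete: key => store.delete(key),
};
```

The same memoryCache can then back content:workshops (an array) and any other key without needing a type parameter.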

How to deal with missing values with createBatch?

Hi, thanks for creating this very useful library!

In the following example:

import type { CacheEntry } from 'cachified';
import LRUCache from 'lru-cache';
import { cachified, createBatch } from 'cachified';

type Entry = any;
const lru = new LRUCache<string, CacheEntry<string>>({ max: 1000 });

function getEntries(ids: number[]): Promise<Entry[]> {
  const batch = createBatch(getFreshValues);

  return Promise.all(
    ids.map((id) =>
      cachified({
        key: `entry-${id}`,
        cache: lru,
        getFreshValue: batch.add(id),
      }),
    ),
  );
}

async function getFreshValues(idsThatAreNotInCache: number[]): Promise<Entry[]> {
  const res = await fetch(
    `https://example.org/api?ids=${idsThatAreNotInCache.join(',')}`,
  );
  const data = await res.json();

  return data as Entry[];
}

Imagine a scenario where some of the IDs that were requested in the fetch request do not return any result and are therefore missing from the data array. How should we deal with these missing values? Should we add null to the array or perhaps undefined?
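One way to handle it (a sketch, not an official cachified recipe) is to index the response by id and return values in request order, with null standing in for ids the API did not return:

```typescript
type Entry = { id: number; title: string };

// Align a sparse API response with the ids that were requested,
// filling in null for ids the API did not return.
function alignBatch(
  requestedIds: number[],
  fetched: Entry[],
): Array<Entry | null> {
  const byId = new Map(fetched.map(e => [e.id, e]));
  return requestedIds.map(id => byId.get(id) ?? null);
}
```

Whether you cache null or skip caching misses entirely depends on whether a miss is worth remembering; null at least survives JSON serialization in JSON-backed caches, while undefined is not valid JSON.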

Thanks!

Adding a reporter changes return type of `getFreshValue` to `unknown`

Without the reporter:

import { LRUCache } from "lru-cache";
import { cachified, CacheEntry } from "cachified";

const lru = new LRUCache<string, CacheEntry>({ max: 1000 });

function getUser() {
  return cachified({
    key: `user`,
    cache: lru,
    async getFreshValue() {
           //    ^? (property) CachifiedOptions<{ name: string; }>.getFreshValue: GetFreshValue<{ name: string }>
      return { name: 'Max' }
    },
  });
}

With the reporter:

import { LRUCache } from "lru-cache";
import { cachified, CacheEntry, verboseReporter } from "cachified";

const lru = new LRUCache<string, CacheEntry>({ max: 1000 });

function getUser() {
  return cachified({
    key: `user`,
    cache: lru,
    reporter: verboseReporter(),
    async getFreshValue() {
           //    ^? (property) CachifiedOptions<unknown>.getFreshValue: GetFreshValue<unknown>
      return { name: 'Max' }
    },
  });
}

Reproduction: Typescript playground

Is this possibly related to #5?

Requests are not de-duplicated if there is no `ttl`

Hi 👋, thanks for this lib.

I'm not entirely sure how request deduplication works in cachified (I don't think it's really documented?), but I've generally seen that getFreshValue is only invoked once when you call the function multiple times in the same window. From the readme example:

import { LRUCache } from 'lru-cache';
import { cachified, CacheEntry, Cache } from '@epic-web/cachified';

const lruInstance = new LRUCache<string, CacheEntry>({ max: 1000 });

function getUserById(userId: number) {
  return cachified({
    key: `user-${userId}`,
    cache: lruInstance,
    async getFreshValue() {
      const response = await fetch(
        `https://jsonplaceholder.typicode.com/users/${userId}`,
      );
      return response.json();
    },
    ttl: 300_000,
  });
}

async function run() {
  getUserById(1);
  getUserById(1);
  const data = await getUserById(1);

  console.log(data);
}

run();

sandbox: https://codesandbox.io/s/elegant-worker-gjgj45?file=/src/index.ts

Now, here we can see that getFreshValue is only called once, which is correct.


But what happens if we want the ttl to be determined by the response? To do this, we can change code to:

function getUserById(userId: number) {
  return cachified({
    key: `user-${userId}`,
    cache: lruInstance,
    async getFreshValue({ metadata }) {
      console.log("getFreshValue", userId);
      const response = await fetch(
        `https://jsonplaceholder.typicode.com/users/${userId}`
      );
+      // ttl determined by response
+      metadata.ttl = 300_000;
      return response.json();
    },
-    ttl: 300_000,
+    ttl: -1,
  });
}

We use this pattern to strictly derive ttl from Cache-Control headers. This works fine, caching wise: Requests are generally not cached, unless there is a Cache-Control header present on the response, which will then determine how long we should cache this response for.

But what it does to deduplication is that getFreshValue is now called 3 times.

sandbox: https://codesandbox.io/s/elegant-worker-forked-5vzm4f?file=/src/index.ts

I'm not sure why this is the case - from my testing, it seems that I need to set a ttl that is longer than the time the request takes (which I can't know) to make deduplication work. Maybe it's related to #16 ?

Maybe there's another setting that I'm missing that can make request-deduplication work for simultaneously fired requests without setting an arbitrarily high ttl from the start?
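From what I can tell, deduplication of simultaneous calls has to hinge on remembering the in-flight promise rather than on the stored value's ttl; a self-contained sketch of that idea (not cachified's actual implementation):

```typescript
// Minimal in-flight deduplication: concurrent callers for the same key
// share one pending promise. The entry is removed once it settles, so
// the ttl of the *stored* value never enters into it.
const pending = new Map<string, Promise<unknown>>();

function dedupe<T>(key: string, getFreshValue: () => Promise<T>): Promise<T> {
  const existing = pending.get(key);
  if (existing) return existing as Promise<T>;
  const p = getFreshValue().finally(() => pending.delete(key));
  pending.set(key, p);
  return p;
}

let calls = 0;
const fetchOnce = () => {
  calls++;
  return Promise.resolve('value');
};
// Three simultaneous requests, one underlying call:
dedupe('user-1', fetchOnce);
dedupe('user-1', fetchOnce);
dedupe('user-1', fetchOnce);
```

If cachified instead keys deduplication off the cache entry itself, that would explain why a negative or tiny ttl breaks it.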

Thanks 🙏

[RFC - New Adapter] Create new Cloudflare KV cache adapter

Proposal

cachified already has a solid foundation as a generic key-value cache utility. The existing adapters handle most scenarios where a distributed store is needed (via Redis). However, for those building tools in the Cloudflare ecosystem, Cloudflare KV is a fantastic distributed KV store, with global low-latency reads and very reasonable cost.

The goal here is to export a new adapter called cloudflareKvCacheAdapter which would be exported from the cachified package, allowing users to use Cloudflare KV as a datastore.

Usage Example

The API exposed by the KV adapter should closely mirror the setup of the existing adapters, e.g. the Redis cache. Here is how I envision the KV cache being set up in a sample worker script:

// This is a sample Cloudflare worker script

import { cachified, Cache, cloudflareKvCacheAdapter } from 'cachified';

export interface Env {
  KV: KVNamespace;
  CACHIFIED_KV_CACHE: Cache;
}

export async function getUserById(userId: number, env: Env): Promise<Record<string, unknown>> {
  return cachified({
    key: `user-${userId}`,
    cache: env.CACHIFIED_KV_CACHE,
    async getFreshValue() {
      const response = await fetch(`https://jsonplaceholder.typicode.com/users/${userId}`);
      return response.json();
    },
    ttl: 300_000,
  });
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // It is a common pattern to pass around the env object to most functions when writing workers code
    // So it's convenient to inject the cache adapter into the env object
    env.CACHIFIED_KV_CACHE = cloudflareKvCacheAdapter({
      kv: env.KV,
      ctx: ctx,
      keyPrefix: 'mycache', // optional
    });
    const userId = Math.floor(Math.random() * 10) + 1;
    const user = await getUserById(userId, env);
    return new Response(`User data is ${JSON.stringify(user)}`);
  },
};
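One wrinkle the adapter would have to handle is translating cachified's millisecond ttl and staleWhileRevalidate into KV's expiration, since KV's put(key, value, { expirationTtl }) takes whole seconds and, per Cloudflare's docs, a minimum of 60. A hedged sketch of that translation (helper name is hypothetical):

```typescript
// How long a KV entry should stay around, in seconds.
// A cachified entry remains useful for ttl + staleWhileRevalidate (both ms);
// Cloudflare KV's expirationTtl is in seconds with a 60s minimum.
function kvExpirationTtl(
  ttl: number | null | undefined,
  staleWhileRevalidate = 0,
): number | undefined {
  // No ttl (or anything infinite) means the entry never expires in KV.
  if (ttl == null) return undefined;
  const totalMs = ttl + staleWhileRevalidate;
  if (!Number.isFinite(totalMs)) return undefined;
  return Math.max(60, Math.ceil(totalMs / 1000));
}
```

The adapter's set would then call `kv.put(key, JSON.stringify(entry), { expirationTtl: kvExpirationTtl(...) })`, omitting the option entirely when undefined.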

Making a PR

I wanted to ask the maintainers and contributors of this project: would you be willing to accept a PR that creates the adapter described above? Happy to connect and discuss anything that is of concern.

Really looking forward to hearing more. Genuinely love the package you've made, and I'm keen to make it accessible with CF KV.

Add support for redis-json

Hey there friend,

Would you be interested in adding support for RedisJSON in cachified? Regarding implementation, I think we can pass an additional options argument to redisCacheAdapter, or create another adapter! Let me know your thoughts 😊.

Example code:

// default to false for enableRedisJson
export declare function redisCacheAdapter(redisCache: RedisLikeCache, options?: { enableRedisJson?: boolean }): Cache;

Update the example in the docs to use a more "realistic" example

Just had a bit of confusion with the key: it wasn't clear that it needs to be dynamic, and I treated it more like a namespace. It's probably not a common mistake, but if someone does this it could lead to some really unexpected results.

For example, we want to cache a resource called user and the key was set to user instead of user_${userId}.

I'm happy to update the docs to either switch out the PI example or add another example.
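To illustrate the pitfall with a toy cache: with a static key every caller shares one slot, so whoever populates the cache first wins for everyone, while an interpolated key gives each resource its own slot:

```typescript
// A toy cache is enough to show why the key must vary per resource.
const cache = new Map<string, string>();

function getCached(key: string, getFreshValue: () => string): string {
  const hit = cache.get(key);
  if (hit !== undefined) return hit;
  const value = getFreshValue();
  cache.set(key, value);
  return value;
}

// Static key: user 2 silently receives user 1's data.
const a = getCached('user', () => 'data-for-user-1');
const b = getCached('user', () => 'data-for-user-2');

// Dynamic key: each user gets their own entry.
const c = getCached('user-1', () => 'data-for-user-1');
const d = getCached('user-2', () => 'data-for-user-2');
```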

Great library btw, going to keep us out of trouble :)

Add support for "soft purge"

More details here: https://developer.fastly.com/reference/api/purging/#soft-vs-hard-purge

I'm writing a blog post about caching and here's how I implement it in my simple example:

async function getEventAttendeeCount(eventId: string) {
	const event = await getEvent(eventId)
	return event.attendees.length
}

const attendeeCountCache = {}
type CacheOptions = { ttl?: number; swr?: number }
async function updateEventAttendeeCountCache(
	eventId: string,
	{ ttl = 1000 * 60 * 60 * 24, swr = 1000 * 60 * 60 },
) {
	attendeeCountCache[eventId] = {
		value: await getEventAttendeeCount(eventId),
		createdTime: Date.now(),
		ttl,
		swr,
	}
}

async function softPurgeEventAttendeeCountCache(eventId: string) {
	if (!attendeeCountCache[eventId]) return

	attendeeCountCache[eventId] = {
		...attendeeCountCache[eventId],
		ttl: 0,
		swr:
			attendeeCountCache[eventId].ttl +
			attendeeCountCache[eventId].createdTime,
	}
}

async function getEventAttendeeCountCached(
	eventId: string,
	{ forceFresh, ...cacheOptions }: CacheOptions & { forceFresh?: boolean } = {},
) {
	if (forceFresh) {
		await updateEventAttendeeCountCache(eventId, cacheOptions)
	}
	const cacheEntry = attendeeCountCache[eventId]
	if (!cacheEntry) {
		// nothing cached yet: wait for the initial value
		await updateEventAttendeeCountCache(eventId, cacheOptions)
	} else if (cacheEntry.createdTime + cacheEntry.ttl < Date.now()) {
		const expiredTime = cacheEntry.createdTime + cacheEntry.ttl
		const serveStale = expiredTime + cacheEntry.swr > Date.now()
		if (serveStale) {
			// fire and forget (update in the background)
			void updateEventAttendeeCountCache(eventId, cacheOptions)
		} else {
			// wait for update
			await updateEventAttendeeCountCache(eventId, cacheOptions)
		}
	}

	return attendeeCountCache[eventId].value
}

So, all we do is update the ttl and swr values: the ttl marks this cached value as expired, while the swr window is kept intact so the next call gets the cached value instantly and kicks off an update in the background.

This has the benefit of not immediately putting pressure on our server to update all cached values at once and instead can get them updated over time. Pretty slick.

Allow me to control the cache time based on the fresh value

Related to my use case in #24: if the user has a gravatar, I'm happy to let that cache hang around for days. If they don't, I want to refresh it every 20 seconds. Maybe ttl could accept a function which receives the current value if it's available and undefined if not?
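A sketch of the proposed shape, where ttl may be a number or a function of the fresh value (this signature is hypothetical, not an existing cachified option):

```typescript
type TtlOption<Value> = number | ((value: Value | undefined) => number);

// Resolve a ttl option against the freshly fetched value.
function resolveTtl<Value>(ttl: TtlOption<Value>, value?: Value): number {
  return typeof ttl === 'function' ? ttl(value) : ttl;
}

// The gravatar use case: cache hits for days, misses for 20 seconds.
const gravatarTtl = (hasGravatar?: boolean) =>
  hasGravatar ? 1000 * 60 * 60 * 24 * 3 : 1000 * 20;
```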

Cachified doesn't work with Astro.build

I'd like to use cachified in an Astro app, which I believe has its own compiler. It works fine when running it in dev mode, but when you try to build and run it, there is an import error. I can resolve that by adding "type": "module" to the cachified package.json. From my experience of getting CJS and ESM to work together, I know that it gets very complicated very quickly, but I'm not sure where else to raise this.

The error is:

(node:61186) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.
(Use `node --trace-warnings ...` to show where the warning was created)
/Users/charlie/dev/astro-cachified/node_modules/cachified/dist/index.js:4

...

SyntaxError: Unexpected token 'export'
    at Object.compileFunction (node:vm:352:18)
    at wrapSafe (node:internal/modules/cjs/loader:1033:15)
    at Module._compile (node:internal/modules/cjs/loader:1069:27)
    at Module._extensions..js (node:internal/modules/cjs/loader:1159:10)
    at Module.load (node:internal/modules/cjs/loader:981:32)
    at Module._load (node:internal/modules/cjs/loader:827:12)
    at ModuleWrap.<anonymous> (node:internal/modules/esm/translators:170:29)
    at ModuleJob.run (node:internal/modules/esm/module_job:198:25)
    at async Promise.all (index 0)
    at async ESMLoader.import (node:internal/modules/esm/loader:409:24)

You can reproduce this by installing the default astro project and then importing and using cachified.

Adapter bug with `swr` set to `Infinity`

We have written our own adapter for ioredis and took inspiration from the official redis adapters provided in this repo.

Our use case is that we want to set swr to Infinity so that all keys are stored in Redis forever and updated in the background by cachified. The official adapters do not seem to take swr into account when determining whether the key in Redis should expire, and we're wondering whether there is a bug here or something we're missing.

I'm providing a failing test case in a Pull Request to demonstrate my point.
#85

redisCacheAdapter does not cache if ttl is set because of a non-integer EXAT value

I have tried out this library for caching API requests with Redis (v4+), using redisCacheAdapter as the adapter. I followed the documentation on how to set up caching, but every request still went through the getFreshValue method I passed in.

After some debugging, I found that the cachified call suppresses an error when setting a value into redis, so I tried manually calling the adapter's set method:

await cachifiedCache.set('key', { value: 1, metadata: { ttl: 60000, createdTime: new Date().getTime() } })

That revealed the following error: ERR value is not an integer or out of range. This was strange, since the ttl value I passed should not be problematic, so I checked the adapter's source code and found how the expiration is set:

{
  EXAT: (ttl + createdTime) / 1000
}

Based on the above, I think Redis needs a unix timestamp passed to the EXAT param, which is why the / 1000 is there. But if we start from a JavaScript timestamp in milliseconds, the result won't be an integer:

const time = new Date().getTime() // something like: 1670395771645

console.log((300_000 + time) / 1000) // will log 1670396071.645

I copied the redisCacheAdapter into my code and updated the setter to:

{
  EXAT: Math.round((ttl + createdTime) / 1000),
}

And now it works for me. :)

Let getFreshValue know whether the update is happening in the background or not

Here's what I've got:

function abortTimeoutSignal(timeMs: number) {
  const abortController = new AbortController()
  void new Promise(resolve => setTimeout(resolve, timeMs)).then(() => {
    abortController.abort()
  })
  return abortController.signal
}

async function gravatarExistsForEmail({
  email,
  request,
  timings,
  forceFresh,
}: {
  email: string
  request: Request
  timings?: Timings
  forceFresh?: boolean
}) {
  return cachified({
    key: `gravatar-exists-for:${email}`,
    cache: lruCache,
    request,
    timings,
    forceFresh,
    ttl: 1000 * 20,
    staleWhileRevalidate: 1000 * 60 * 60 * 24 * 365,
    checkValue: prevValue => typeof prevValue === 'boolean',
    getFreshValue: async () => {
      const gravatarUrl = getAvatar(email, {fallback: '404'})
      try {
        const avatarResponse = await fetch(gravatarUrl, {
          method: 'HEAD',
          signal: abortTimeoutSignal(1000 * 2),
        })
        return avatarResponse.status === 200
      } catch (error: unknown) {
        console.error(`Error getting gravatar for ${email}:`, error)
        return false
      }
    },
  })
}

I have the abortTimeoutSignal thing in place so gravatar won't cause me issues if my page is waiting on it. I think if my page is waiting on it I'd prefer that the timeout time be more like 500ms, but if the update is happening in the background (SWR) then I'm fine with it taking even 10 seconds. Thoughts? Maybe an argument passed to getFreshValue?
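As a sketch of what that argument could look like (the `background` flag is hypothetical, not an existing cachified option), the budget selection itself is trivial:

```typescript
// Hypothetical: cachified passes getFreshValue a context saying whether
// this refresh is blocking a request or running in the background (SWR).
function pickGravatarTimeoutMs(context: { background: boolean }): number {
  // Block the page for at most 500ms, but give background refreshes 10s.
  return context.background ? 10_000 : 500;
}
```

The result would then feed the abortTimeoutSignal helper above in place of the fixed 1000 * 2.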

Performance: support batch-operations on caches

When working with batches, some caches could support batch get/set/delete operations. It would probably save bandwidth and time if cachified supported these operations, falling back to parallel single operations when the cache does not support them.

export interface Cache {
    name?: string;
    get: (key: string) => Eventually<CacheEntry<unknown>>;
    set: (key: string, value: CacheEntry<unknown>) => unknown | Promise<unknown>;
    delete: (key: string) => unknown | Promise<unknown>;
    batch?: {
      get: (keys: string[]) => Eventually<CacheEntry<unknown>[]>;
      set: (entries: { key: string, value: CacheEntry<unknown> }[]) => unknown | Promise<unknown>;
      delete: (keys: string[]) => unknown | Promise<unknown>;
    }
}
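On the consuming side, the fallback could be as simple as using cache.batch when present and fanning out to single-key operations otherwise; a sketch against a trimmed-down version of the interface above:

```typescript
type CacheEntry = { value: unknown; metadata: { createdTime: number } };
type Eventually<T> = T | null | undefined | Promise<T | null | undefined>;

interface Cache {
  get(key: string): Eventually<CacheEntry>;
  set(key: string, value: CacheEntry): unknown;
  batch?: { get(keys: string[]): Eventually<CacheEntry[]> };
}

// Prefer the cache's native batch get; fall back to parallel singles.
async function batchGet(
  cache: Cache,
  keys: string[],
): Promise<Array<CacheEntry | null | undefined>> {
  if (cache.batch) return (await cache.batch.get(keys)) ?? [];
  return Promise.all(keys.map(key => cache.get(key)));
}
```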

Optionally support zod validators

The type-safety of the lib is based on the assumption that

  1. the types stored under a given key do not change,
  2. nothing else writes to the cache key,
  3. or checkValue is implemented and has no bugs.

All in all, not the most robust foundation to build upon 🤔

Inspired by libs like tRPC, I thought maybe cachified could optionally use zod to validate values.

Not sure if this requires an actual change, or whether updating the recipe in the readme is enough.
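The recipe shape would presumably be checkValue: value => schema.safeParse(value).success; sketched here with a hand-rolled zod-like schema so the example stays dependency-free:

```typescript
// A tiny zod-like schema object. With the real library this would be
// z.object({ name: z.string() }), and the checkValue line stays identical.
const userSchema = {
  safeParse(value: unknown): { success: boolean } {
    const ok =
      typeof value === 'object' &&
      value !== null &&
      typeof (value as { name?: unknown }).name === 'string';
    return { success: ok };
  },
};

// The function you would hand to cachified's checkValue option.
const checkValue = (value: unknown) => userSchema.safeParse(value).success;
```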
