OpenAI

  • Artificial Intelligence
    AI and Compute

    We’re releasing an analysis showing that since 2012, the amount of compute used in the largest AI...

  • Artificial Intelligence
    AI Safety via Debate

    We’re proposing an AI safety technique which trains agents to debate topics with one another, using a...

  • Artificial Intelligence
    Report from the OpenAI Hackathon

    On March 3rd, we hosted our first hackathon with 100 members of the artificial intelligence community. We...

  • Artificial Intelligence
    OpenAI Scholars

    We’re providing 6-10 stipends and mentorship to individuals from underrepresented groups to study deep learning full-time for...

  • Artificial Intelligence
    OpenAI Hackathon

    Come to OpenAI’s office in San Francisco’s Mission District for talks and a hackathon on Saturday, March...

  • Artificial Intelligence
    OpenAI Supporters

    We’re excited to welcome the following new donors to OpenAI: Jed McCaleb, Gabe Newell, Michael Seibel, Jaan...

  • Artificial Intelligence
    Interpretable Machine Learning through Teaching

    We’ve designed a method that encourages AIs to teach each other with examples that also make sense...

  • Artificial Intelligence
    Requests for Research 2.0

    We’re releasing a new batch of seven unsolved problems which have come up in the course of...

  • Deep Learning
    Scaling Kubernetes to 2,500 Nodes

    We’ve been running Kubernetes for deep learning research for over two years. While our largest-scale workloads manage...

  • Deep Learning
    Block-Sparse GPU Kernels

    We’re releasing highly-optimized GPU kernels for an underexplored class of neural network architectures: networks with block-sparse weights. Depending on the chosen sparsity, these kernels can run orders of magnitude faster than cuBLAS or cuSPARSE. We’ve used them to attain state-of-the-art results in text sentiment analysis and generative modeling of text and images.
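    The block-sparse idea in that last entry can be illustrated with a minimal NumPy sketch (the function name, storage layout, and mask convention here are illustrative assumptions, not the API of OpenAI's released CUDA kernels): the weight matrix is divided into fixed-size square blocks, only the nonzero blocks are stored, and multiplication skips the zero blocks entirely, which is where the sparsity-dependent speedup comes from.

    ```python
    import numpy as np

    def block_sparse_matmul(x, blocks, mask, block_size):
        """Multiply x of shape (batch, in_dim) by a block-sparse weight matrix.

        The weight matrix is never materialized densely: `blocks` holds only
        the nonzero (block_size x block_size) blocks in row-major order, and
        `mask[i, j]` marks which (row-block, column-block) positions are
        present. Zero blocks are skipped rather than multiplied, so the cost
        scales with the number of nonzero blocks, not the full matrix size.
        """
        n_in_blocks, n_out_blocks = mask.shape
        out = np.zeros((x.shape[0], n_out_blocks * block_size))
        b = 0  # index into the packed list of stored blocks
        for i in range(n_in_blocks):
            for j in range(n_out_blocks):
                if mask[i, j]:
                    out[:, j * block_size:(j + 1) * block_size] += (
                        x[:, i * block_size:(i + 1) * block_size] @ blocks[b]
                    )
                    b += 1
        return out
    ```

    A real kernel would map each nonzero block to a GPU thread block instead of looping in Python, but the arithmetic is the same: the result matches a dense matmul against the equivalent weight matrix with zeros filled in.
    
    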