Training GPT-4, And How OpenAI Uses W&B To Train All Their Models


_At Fully Connected, the MLOps conference held by Weights & Biases, Peter Welinder discusses how OpenAI (developers of GPT-4 and DALL-E 2) uses Weights & Biases for model training. He highlights the ability to share training runs and to create reports, which teams use heavily to document their hypotheses, experiments, and conclusions, resulting in mini scientific papers that provide insight into the work being done at OpenAI._

*Transcript*

Lukas Biewald - Okay, and final question. Since this is our user conference and you are such a high-profile user, I'm curious if you could say a little bit about how OpenAI uses Weights & Biases, and if you have a favorite feature or part of Weights & Biases, we'd love to know about that.

Peter Welinder - Yeah, I mean, we use it for pretty much all of our model training, just tracking runs. There's a lot of sharing too; the fact that you can easily share training runs and things like that is a super-used feature.

These days I do way, way less of that sort of hands-on work myself, but one of the features I really like is the ability to create reports. We use that quite heavily.

It depends a little bit on the team, but a number of teams use them quite heavily to structure their work:

Here's the hypothesis.
Here are the experiments that were run to kind of validate or invalidate that hypothesis.
Here's the conclusion.

You have all these mini scientific papers, essentially, on all of the stuff that's happening at OpenAI, which is incredibly interesting to kind of follow along with.

Lukas Biewald - Fantastic. That sounds very interesting.
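
The run tracking and sharing Welinder describes corresponds to the standard Weights & Biases Python workflow. Below is a minimal sketch, assuming a hypothetical project name, config, and simulated metrics rather than anything OpenAI actually trains:

```python
import random

import wandb

# A minimal sketch of run tracking with the wandb client.
# The project name, config values, and metrics are illustrative assumptions.
run = wandb.init(
    project="example-model-training",
    config={"learning_rate": 3e-4, "batch_size": 32},
)

for step in range(100):
    # A real training loop would compute loss from a model;
    # here it is simulated so the example runs on its own.
    loss = 1.0 / (step + 1) + 0.01 * random.random()
    wandb.log({"train/loss": loss})

run.finish()
```

The URL printed by `wandb.init` is what makes a run easy to share, and logged runs can later be pulled into a W&B Report alongside the hypothesis, experiments, and conclusions Welinder mentions.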
