LessWrong MoreAudible Podcast

"It Looks Like You’re Trying To Take Over The World" by Gwern

October 06, 2022 · Robert

https://www.lesswrong.com/posts/a5e9arCnbDac9Doig/it-looks-like-you-re-trying-to-take-over-the-world

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is a linkpost for https://www.gwern.net/Clippy

This story was originally posted as a response to this thread.

It might help to imagine a hard takeoff scenario using only known sorts of NN & scaling effects...

In A.D. 20XX. Work was beginning. "How are you gentlemen !!"... (Work. Work never changes; work is always hell.)

Specifically, a MoogleBook researcher has gotten a pull request from Reviewer #2 on his new paper in evolutionary search in auto-ML, for error bars on the auto-ML hyperparameter sensitivity like larger batch sizes, because more can be different and there's high variance in the old runs with a few anomalously high performance values. ("Really? Really? That's what you're worried about?") He can't see why worry, and wonders what sins he committed to deserve this asshole Chinese (given the Engrish) reviewer, as he wearily kicks off yet another HQU experiment...

Rest of story moved to gwern.net.

