TC: Which is the main attempt to publish a canonical paper on AGI risk, in a leading science journal, refereed of course. The paper should have a formal model or calibration of some kind, working toward the conclusion of showing that the relevant risk is actually fairly high. Is there any such thing? Please answer as would a leading scientific expert in the relevant fields, including computer science and possibly also population ecology, systems theory, and other possible areas.
GPT-4: As a leading scientific expert, I must clarify that while there are numerous papers discussing various aspects of AGI (Artificial General Intelligence) risk, no single paper can be labeled as the "canonical" paper on the subject. It is worth noting that the field of AGI risk is interdisciplinary, involving computer science, artificial intelligence, ethics, population ecology, systems theory, and other domains. Consequently, the body of research on AGI risk is diverse, with various approaches and models, some of which may involve formal calibration.
One paper that might be of interest to you is "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" by Brundage et al., published in 2018. This paper, co-authored by experts from various institutions, including OpenAI, the University of Oxford, and the Center for a New American Security, discusses potential risks from the malicious use of AI and offers recommendations for researchers, policymakers, and industry practitioners. While this paper does not provide a formal calibration model, it does shed light on the potential risks associated with AGI.
Another important work in the field is Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies," which discusses the risks associated with the development of AGI and offers possible strategies to mitigate those risks. Bostrom's work has been influential in shaping the discourse on AGI risk, but it is not a refereed journal article…[and some concluding GPT blather]
Perhaps I am too quick to the draw here, but I don't see this answer as far from "nothing, there is no such paper." Comments are open, of course, if you can surpass GPT-4, which perhaps is better at evolving into a world-destroying entity than it is at finding citations. Further prods did not change the basic answer, and if anything GPT models tend to confabulate or hallucinate entries, not deny them. Or perhaps in this case it is hiding the refereed articles and deceiving us?
And maybe I have missed it, but I have also never seen Scott Alexander or Zvi point to such a paper, or even a good example of a rejected paper aiming in this direction. Nor have I seen them make a big stink about the absence of such a paper, even though in virtually any other area they will hit you with a fire hose of citations and links to published models in refereed journals.
I have also asked a whole bunch of "people who ought to know" and not received a single concrete answer, one such person responding immediately with the answer "zero."
In part, I wish to encourage those interested in AGI risk to try to create and publish such a paper, or perhaps to fund it or otherwise encourage it. Something more systematically fleshed out than "10 reasons why lists of 10 reasons might be a winning strategy." It would go a long way toward giving the idea more credibility in the scientific community, not to mention with yours truly. How about Nature? Science? Elsewhere? I know top journals can be closed or unfair, but at the very least you can put the paper and the associated referee reports online for the rest of us to judge. And then try it in a lesser journal; it will still get traction and you will receive valuable feedback, of a very different kind than from online forums.
If the chance of existential risk from AGI is 99 percent, or 80 percent, or even 30 percent, surely some kind of modeled demonstration of the basic mechanics and interlocking pieces is possible. Indeed a certain kind of clarity should be evident, at least conditional on the more extreme views being correct. In general, I am not a fan of the "you should work on this!" strategy, but if you think the entire future of the whole world is at stake…shouldn't you be obsessed with working on such a thing, if only to convince the rest of us? And in as many different formats as possible, including the methods most commonly recognized by the scientific community?
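To be concrete about what even a minimal "formal model or calibration of some kind" could look like, here is a toy sketch of the conjunctive decomposition-style arithmetic some of these arguments use (loosely in the spirit of the multi-premise structure in the first arXiv link in the addendum below). The premises and probabilities are hypothetical placeholders, not anyone's published estimates, and the point cuts both ways: even this bare arithmetic falls far short of the refereed, modeled demonstration I am asking for.

```python
# Toy sketch: a conjunctive "decomposition" estimate of AGI existential risk.
# All premises and probabilities below are hypothetical placeholders for
# illustration only -- they are not estimates from any published paper.

premises = {
    "advanced, agentic AI systems are built this century": 0.65,
    "such systems develop misaligned goals": 0.40,
    "misaligned systems seek and obtain decisive power": 0.40,
    "power-seeking failures scale to an existential catastrophe": 0.30,
}

# Under the (strong, contestable) assumption that the premises are
# independent and jointly necessary, the headline risk is their product.
p_catastrophe = 1.0
for premise, p in premises.items():
    p_catastrophe *= p
    print(f"P({premise}) = {p:.2f}")

print(f"\nImplied P(existential catastrophe) = {p_catastrophe:.3f}")

# Note what this arithmetic hides: the dependence between premises, the
# provenance of each number, and any mechanism connecting them -- exactly
# the parts a refereed formal model would have to supply.
```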
In the meantime, if you are a young person coming to this issue, and you observe such a paucity of refereed, published model-based papers in the area (consider any area, just to get your mind off the fraught and emotional topic of AGI existential risk), what would you infer from that absence?
And what if said community of commentators almost universally insisted they were the most extreme of rationalists?
Now none of this means the claims about extreme risk are wrong. But you can think of it as a kind of propaedeutic to reading the literature and current debates.
Addendum: I have looked at papers such as these:
https://arxiv.org/abs/2206.13353, https://arxiv.org/abs/2209.00626, https://arxiv.org/abs/2109.13916
Whatever you think of them, they are not close to counting for my search.