WEEKLY NEWSLETTER 10 - 15 APRIL 2023
Meetings This Week
2023/04/11 — 18:00-20:00 — April, Tue — Programming 1
2023/04/15 — 14:00-16:00 — April, Sat — Web Design 1
Meetings Next Week
2023/04/18 — 10:00-12:00 — April, Tue — Tuesday Group 1
Important message — Ed.
Hi, Club Members:
I'm not sure what the fate of our Tuesday meeting will be, as there seems to be minimal interest from members.
However, I can prepare a half-hour presentation/discussion at our next meeting to help fill in some of the time, now that the computer problem has been fixed.
— Tim Kelly
Please come along and give us your ideas — Ed.
Schedule of Current & Upcoming Meetings
First Tuesday 18:00-20:00 — Main Meeting
First Saturday 13:00-14:00 — Penrith Group
Second Tuesday 18:00-20:00 — Programming
Third Tuesday 10:00-12:00 — Tuesday Group
Third Saturday 14:00-16:00 — Web Design
----------
2023/04/01 — 13:00-14:00 — April, Sat — Penrith Group (NO MEETING) 2
2023/04/04 — 18:00-20:00 — April, Tue — Main Meeting 1
2023/04/11 — 18:00-20:00 — April, Tue — Programming 1
2023/04/15 — 14:00-16:00 — April, Sat — Web Design 1
2023/04/18 — 10:00-12:00 — April, Tue — Tuesday Group 1
----------
1 As decided after assessing the Members' wishes (resumption of face-to-face meetings) via the latest Online Survey.
2 Penrith meetings are held every 2nd month on the 1st Saturday from 1-2 pm. Next meetings: May, July and September 2023.
ASCCA News:
Tech News:
Bill Gates Doesn't Think You Can Pause AI, Even If You Want To
See the Gizmodo article by Nikki Main, published Tuesday, April 4, at 2:10 pm.
Elon Musk and others have called for a pause on powerful AI systems, but Bill Gates doesn't think that's the right way to go about things.
Microsoft co-founder and philanthropist Bill Gates spoke out against recent calls to pause artificial intelligence development over fears of its risk to society. Gates instead advocates finding ways to use and embrace AI, saying it isn't likely to go away.
He said in an interview with Reuters that it would be increasingly difficult to pause AI on a global scale and the world should instead focus on how AI can be used to better society. "I don't think asking one particular group to pause [AI] solves the challenges," Gates told the outlet on Monday. He added, "Clearly there's huge benefits to these things… what we need to do is identify the tricky areas."
Artificial intelligence can benefit many areas, such as improving education and addressing climate change, Gates said in his blog. The importance of AI lies not just in improving specific areas; it should also be used to "make sure that everyone — and not just people who are well-off — benefits from artificial intelligence," Gates wrote.
He explored other areas that could benefit from AI, such as the global healthcare system, where it could free up workers' time by helping with "things like filing insurance claims, dealing with paperwork, and drafting notes from a doctor's visit." Although there are many ways humans can benefit from AI, Gates acknowledged that the systems are imperfect; still, he said, those suggesting a pause should remember that "Artificial intelligence still doesn't control the physical world and can't establish its own goals."
Gates' comments follow a recent letter from Elon Musk and other technologists, engineers, and AI ethicists suggesting a six-month pause on training systems more powerful than OpenAI's GPT-4.
The letter claims "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," and describes an "out-of-control race" to develop AI "that no one — not even their creators — can understand, predict, or reliably control."
Musk and others signed the letter, which calls for governments to step in and institute a moratorium if an immediate, public and verifiable pause cannot be enacted quickly.
Gates said he disagrees with the requested pause, telling Reuters it won't solve the challenges the technology poses; the key, he said, is to focus on making it better. "This new technology can help people everywhere improve their lives," Gates said in his blog.
"At the same time, the world needs to establish the rules of the road so that any downsides of artificial intelligence are far outweighed by its benefits, and so that everyone can enjoy those benefits no matter where they live or how much money they have. The Age of AI is filled with opportunities and responsibilities."
While Gates doesn't see how a pause on the tech could be productive, Italy moved to temporarily block access to ChatGPT last week, and the country's data privacy regulator said it would begin an investigation into OpenAI, the company behind the popular chatbot.
*** THE OPEN LETTER ON AI *** Pause Giant AI Experiments
See the Future of Life Institute letter, signed by 17,000 (and counting) prominent signatories calling for a pause on giant AI experiments.
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk the loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states "At some point, it may be important to get an independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should, at a minimum, include new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish natural from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has paused on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.
Some of the Signatories:
Yoshua Bengio, Founder and Scientific Director at Mila, Turing Prize winner and professor at the University of Montreal.
Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook "Artificial Intelligence: a Modern Approach."
Elon Musk, CEO of SpaceX, Tesla & Twitter.
Steve Wozniak, Co-founder of Apple.
Yuval Noah Harari, Author and Professor, Hebrew University of Jerusalem.
Emad Mostaque, CEO, Stability AI.
Andrew Yang, Forward Party, Co-Chair, Presidential Candidate 2020, NYT Bestselling Author, Presidential Ambassador of Global Entrepreneurship.
John J Hopfield, Princeton University, Professor Emeritus, inventor of associative neural networks.
Valerie Pisano, President & CEO, Mila.
Connor Leahy, CEO, Conjecture.
...
We have prepared some FAQs in response to questions and discussion in the media and elsewhere; they are available on the Future of Life Institute website.
Notes and references:
[1]
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 610-623).
Bostrom, N. (2016). Superintelligence. Oxford University Press.
Bucknall, B. S., & Dori-Hacohen, S. (2022, July). Current and near-term AI as a potential existential risk factor. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 119-129).
Carlsmith, J. (2022). Is Power-Seeking AI an Existential Risk?. arXiv preprint arXiv:2206.13353.
Christian, B. (2020). The Alignment Problem: Machine Learning and human values. Norton & Company.
Cohen, M. et al. (2022). Advanced Artificial Agents Intervene in the Provision of Reward. AI Magazine, 43(3) (pp. 282-293).
Eloundou, T., et al. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.
Hendrycks, D., & Mazeika, M. (2022). X-risk Analysis for AI Research. arXiv preprint arXiv:2206.05862.
Ngo, R. (2022). The alignment problem from a deep learning perspective. arXiv preprint arXiv:2209.00626.
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.
Weidinger, L. et al (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.
[2]
Ordonez, V. et al. (2023, March 16). OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: 'A little bit scared of this'. ABC News.
Perrigo, B. (2023, January 12). DeepMind CEO Demis Hassabis Urges Caution on AI. Time.
[3]
Bubeck, S. et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv:2303.12712.
OpenAI (2023). GPT-4 Technical Report. arXiv:2303.08774.
[4]
Ample legal precedent exists — for example, the widely adopted OECD AI Principles require that AI systems "function appropriately and do not pose unreasonable safety risk".
[5]
Examples include human cloning, human germline modification, gain-of-function research, and eugenics.
--------------------------
Fun Facts:
NASA engine capable of travelling at nearly the speed of light detailed in a new report
See the News.com article by Harry Pettit, October 17, 2019 — 10:22 am.
As more countries join the race to explore space, a NASA scientist has revealed a new propulsion method that could give his agency the edge.
[ Image: The EM drive, a machine that could theoretically generate rocket thrust using rays of light. Later proved to be impossible. ]
A NASA scientist has devised plans for a bonkers new rocket engine that can reach close to the speed of light — without using any fuel.
Travelling at such speeds, the theoretical machine could carry astronauts to Mars in less than 13 minutes or to the Moon in just over a second.
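As a quick sanity check, the arithmetic behind those claims holds up. Here is a minimal sketch in Python, assuming an average Earth-Mars distance of about 225 million km and an average Earth-Moon distance of about 384,400 km (the article itself gives no distances):

    # Sanity check of the travel-time claims at 99 per cent of light speed.
    # The distances below are assumed averages, not figures from the article.
    c = 299_792_458                              # speed of light, m/s
    v = 0.99 * c                                 # ~297 million m/s, the speed quoted in the article
    mars = 225e9                                 # assumed average Earth-Mars distance, metres
    moon = 384_400e3                             # assumed average Earth-Moon distance, metres
    print(f"Mars: {mars / v / 60:.1f} minutes")  # ~12.6 minutes ("less than 13 minutes")
    print(f"Moon: {moon / v:.2f} seconds")       # ~1.3 seconds ("just over a second")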
However, the real purpose of the so-called "helical engine" would be to travel to distant stars far quicker than any existing tech, according to NASA engineer Dr David Burns.
[ Image: The latest contender, the helical engine. ]
Dr Burns, from NASA's Marshall Space Flight Centre in Alabama, unveiled the idea in a head-spinning paper posted to NASA's website.
"This in-space engine could be used for long-term satellite station-keeping without refuelling," Dr Burns writes in his paper.
"It could also propel spacecraft across interstellar distances, reaching close to the speed of light."
At these speeds, light would struggle to keep up with you, warping your vision in bizarre ways.
Everything behind you would appear black, and time would seem to stop altogether, with clocks slowing down to a crawl and planets seemingly ceasing to spin.
Dr Burns' idea is revolutionary because it does away with rocket fuel.
Today's rockets, like those built by NASA and SpaceX, would need tonnes of propellants like liquid hydrogen to carry people to Mars and beyond.
The problem is the more fuel you stick on the craft, the heavier it is. Modern propellant tanks are far too bulky to take on interstellar flights.
The helical engine gets around this by using hi-tech particle accelerators like Europe's Large Hadron Collider.
Tiny particles are fired at high speed using electromagnets, recycled around the engine, and fired again.
Using a loophole in the laws of physics, the engine could theoretically reach speeds of around 297 million metres per second (99 per cent of the speed of light), according to Dr Burns.
The contraption is just a concept for now, and it's unclear if it would work.
"If someone says it doesn't work, I'll be the first to say it was worth a shot," Dr Burns told New Scientist.
"You have to be prepared to be embarrassed. It is tough to invent something new under the sun and works."
In its simplest terms, the engine works by taking advantage of how mass changes at speeds approaching the speed of light.
In his paper, Dr Burns breaks the idea down with a simple thought experiment: a ring inside a box, attached to each end by a spring.
When the ring is sprung in one direction, the box recoils in the other, as described by Newton's third law of motion: every action has an equal and opposite reaction.
"When the ring reaches the end of the box, it will bounce backwards, and the box's recoil direction will switch too," New Scientist explains.
However, if the box and the ring travel at speeds approaching the speed of light, things work differently.
At such speeds, according to Albert Einstein's theory of relativity, the ring's mass will increase as it speeds toward the end of the box.
This means it will hit harder when it reaches the end of the box, resulting in forward momentum.
The engine will achieve a similar feat using a particle accelerator and ion particles, but that's the general gist.
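For readers who want the numbers behind "hit harder", the effect rests on the Lorentz factor of special relativity, which scales an object's momentum as it nears light speed. The following Python sketch of the textbook formula is our own illustration, not code from Dr Burns' paper:

    import math

    c = 299_792_458  # speed of light, m/s

    def gamma(v):
        # Lorentz factor: 1 / sqrt(1 - (v/c)^2); grows without bound as v approaches c.
        return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    # Relativistic momentum p = gamma * m0 * v for an illustrative 1 kg ring.
    # A ring cycled faster in one direction than the other carries unequal
    # momentum on the two legs; that asymmetry is what the helical engine
    # concept tries to exploit.
    m0 = 1.0
    for frac in (0.10, 0.50, 0.90, 0.99):
        v = frac * c
        print(f"v = {frac:.2f}c  gamma = {gamma(v):6.3f}  p = {gamma(v) * m0 * v:.3e} kg*m/s")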
"Chemical, nuclear and electric propulsion systems produce thrust by accelerating and expelling propellants," Dr Burns writes in his paper.
"Deep space travel is often a trade-off between thrust and large propellant storage tanks that eventually limit performance.
"The objective of this paper is to introduce and examine a unique engine that uses a closed-cycle propellant."
The design could, in theory, drive a craft to 99 per cent of the speed of light without breaking Einstein's theory of relativity, according to Dr Burns.
However, the plan does breach Newton's third law of motion, meaning it violates the laws of physics as currently understood.
That's not the only thing holding the helical engine back: Dr Burns reckoned it would have to be 198 metres long and 12 metres wide to work.
The gizmo would also only operate effectively in the frictionless environment of deep space.
It may sound like a harebrained scheme, but engine concepts that do away with rocket fuel have been proposed before.
They include the EM drive, a machine that could theoretically generate rocket thrust using rays of light. The idea was later proved impossible.
"I know that it risks being right up there with the EM drive and cold fusion," Dr Burns told New Scientist.
[ And what about the small matter of ACCELERATION to these speeds? — Ed. ]
Meeting Location & Disclaimer
Bob Backstrom
~ Newsletter Editor ~
Information for Members and Visitors:
Link to — Sydney PC & Technology User Group
All Meetings, unless explicitly stated above, are held on the
1st Floor, Sydney Mechanics' School of Arts, 280 Pitt Street, Sydney.
Sydney PC & Technology User Group's FREE Newsletter — Subscribe — Unsubscribe
Go to Sydney PC & Technology User Group's — Events Calendar
Are you changing your email address? Would you please email your new address to — newsletter.sydneypc@gmail.com?
Disclaimer: We provide this Newsletter "As Is" without warranty of any kind.
The reader assumes the entire risk of accuracy and subsequent use of its contents.