WEEKLY NEWSLETTER 27 NOVEMBER - 02 DECEMBER, 2023
Hello and Welcome,
Meetings This Week
NO MEETINGS
Meeting Next Week
2023/12/05 — 18:00-20:00 — Tue, 5 December — Main Meeting
Greetings, Members,
The December 2023 Main Meeting will be held face-to-face at the SMSA Building, Pitt Street, Sydney.
All Members are invited to this real, in-person meeting on December 5th.
[ Members were invited to answer a poll to gauge the attendance numbers. ]
— Steve South (President)
Schedule of Current & Upcoming Meetings
First Tuesday 18:00-20:00 — Main Meeting
First Saturday 13:00-14:00 — Penrith Group
Second Tuesday 18:00-20:00 — Programming
Third Tuesday 10:00-12:00 — Tuesday Group
Third Saturday 14:00-16:00 — Web Design
----------
Go to the official Sydney PC Calendar for this month's meeting details.
----------
Penrith meetings are held every second month on the first Saturday, from 1-2 pm.
The next meetings are in January, March and May 2024.
ASCCA News:
Tech News:
Optus CEO Kelly Bayer Rosmarin resigns after network outage
See The Guardian article by Josh Taylor | @joshgnosis | Mon 20 Nov 2023, at 15.49 AEDT.
Optus parent company Singtel says 'priority is about setting on a path of renewal for the benefit of the community and customers'.
Kelly Bayer Rosmarin has resigned as the chief executive of Optus in the wake of the nationwide outage that took down phone and internet services for 14 hours close to two weeks ago.
In a statement released by Optus's parent company, Singtel, on Monday morning, Bayer Rosmarin said it was an appropriate time to step down, following her appearance at a Senate inquiry into the outage on Friday.
OpenAI restores Sam Altman as CEO after his tumultuous ouster
See The Japan Times article by Reuters on Nov 22, 2023.
SAN FRANCISCO — OpenAI on Tuesday said it reached an agreement for Sam Altman to return as CEO days after his ouster, capping frenzied discussions about the startup's future at the centre of the artificial intelligence boom.
In addition to Altman's return, the company agreed in principle to partly reconstitute the board of directors that had dismissed him. Former Salesforce co-CEO Bret Taylor and former U.S. Treasury Secretary Larry Summers will join Quora CEO and current director Adam D'Angelo, OpenAI said.
In a post on X, Sam Altman said, "I'm looking forward to returning to OpenAI."
His return caps a tumultuous weekend that saw Altman agree to join OpenAI's financial backer, Microsoft, to head a new research team there. That came after OpenAI's board rejected his first attempt to return to the startup on Sunday, instead naming ex-Twitch boss Emmett Shear as interim CEO.
In a post on X, Shear celebrated Tuesday's late-night outcome, which he said followed "~72 very intense hours of work."
Altman's dismissal had brought uncertainty for both OpenAI and Microsoft, which had moved quickly to carry out damage control over the weekend by vowing to hire him and Greg Brockman, president of the startup.
Brockman, who had quit after Altman was ousted, said in a post on X that he was "getting back to coding tonight."
Nearly all of OpenAI's more than 700-strong staff on Monday had threatened to leave unless the board stepped down and reinstated Altman and Brockman, according to a letter reviewed by Reuters.
In a statement on X, Microsoft CEO Satya Nadella welcomed the changes to OpenAI's board.
"We believe this is a first essential step on a path to more stable, well-informed, and effective governance," he said.
Online Tracking More Detailed Than Thought
See the InfoPackets article by John Lister on November 20, 2023, at 05:11 pm EST.
It's no secret that advertisers and other groups buy and sell data about people's Internet use. But a new report says the information is far more detailed and specific than realised.
The Irish Council for Civil Liberties (ICCL) says it's much easier than people realise to identify specific individuals, in some cases threatening national security. The data isn't hacked or stolen; it is made available to people bidding for online advertising slots and trying to reach a particular audience.
The basics of how this works are well known. Legitimate online businesses track users online but don't sell individual records. Instead, they'll label them as likely fitting particular groups, such as football team fans or new parents, based on their browsing history.
Advertisers can then target these groups, for example people who are likely golf fans, live near a specialist golf store and appear to buy high-end goods. In principle, tech firms try to balance the categorization so that groups are narrow enough to allow effective ad targeting but broad enough that identifying any individual is difficult.
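To make the mechanics concrete, here is a minimal sketch of segment-based targeting. The bid-request shape, segment names, and matching logic below are invented for illustration only (real exchanges use standardized protocols such as OpenRTB); the point is simply that advertisers match on combinations of categories, and narrow combinations can single people out.

```typescript
// Hypothetical sketch of segment-based ad targeting. The BidRequest
// shape and segment names are invented for illustration only.
interface BidRequest {
  userId: string;     // pseudonymous identifier, not a real name
  segments: string[]; // interest categories inferred from browsing history
  geo: string;        // coarse location
}

// An advertiser bids only when every wanted segment matches
// and the user is in the targeted area.
function shouldBid(req: BidRequest, wanted: string[], geo: string): boolean {
  return req.geo === geo && wanted.every(s => req.segments.includes(s));
}

// The article's example: a likely golf fan near a specialist golf
// store who appears to buy high-end goods.
const request: BidRequest = {
  userId: "abc-123",
  segments: ["golf-fan", "high-end-shopper"],
  geo: "near-golf-store",
};
console.log(shouldBid(request, ["golf-fan", "high-end-shopper"], "near-golf-store")); // true
```

The narrower the intersection of segments, the closer a "group" comes to being one identifiable person, which is exactly the risk the ICCL describes below.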
Blackmail Made Easier
However, the ICCL investigation found that advertisers have more categories, covering more specific characteristics than widely assumed. That means the available information about any individual is much greater, making it easier to cross-reference with other details to identify them.
Some categories in ad data revealed by the ICCL were highly personal, including information about potentially embarrassing health conditions. Other categories were potentially a significant security risk for the individuals and companies.
These included people categorized as a judge, elected officials, national security workers, military personnel, military families, or counter-terrorism workers. If identified, some of these people could be threatened or blackmailed.
Russia Among Customers
The report also claims Google sends ad data to a Russian broker that tries to identify people who regularly visit websites politically opposed to the Russian government. (Source: iccl.ie)
Google and Microsoft responded to the report, saying they protected individuals' privacy when handling ad data. Microsoft said it complied with all laws, while Google said the way it makes data available to potential advertisers "simply [doesn't] allow bad actors to compromise people's privacy and security". (Source: ft.com)
What's Your Opinion?
Are you surprised by the reports? How easily do you think a bad actor could gather information about you and identify you? Is it possible or desirable to have stricter laws on how tech companies share and combine data about online activity?
Comments
TikTok owned by Chinese — Submitted by Dennis Faas on Mon, 20/11/2023 — 18:13.
This is slightly off-topic but related. TikTok is owned by a Chinese firm (Bytedance), and it's been suggested that the algorithms determining what a user is "most likely interested in" are being tweaked to stir up unrest in countries like the USA. For example, it's been reported that millennials are likely to see pro-Hamas propaganda on TikTok, rather than support for Israel.
YouTube Adds 5-Second Delay to Punish Ad Blockers in All Browsers
See the How-To Geek article by Andrew Heinzman | published 22 Nov 2023.
The 5-second delay is not exclusive to Firefox users, says Google.
After a long day of anger and speculation, Google has finally commented on the "artificial wait" that some Firefox users are encountering on YouTube. This 5-second delay, which is visible in YouTube's code, is designed to punish those who use ad blockers, and it affects all browsers, not just Firefox.
Some YouTube users began encountering an odd video delay in mid-November. This phenomenon became controversial on November 19th when a Reddit user accused YouTube of artificially slowing load times in Firefox. The logic was pretty straightforward. Delayed load times were only experienced in Firefox, and switching Firefox's user agent to Chrome automatically resolved the problem.
Additional evidence came in the form of a short snippet of code — setTimeout(function() { c(); a.resolve(1) }, 5E3);. This code, baked into YouTube, proves the five-second delay is intentional. But Reddit users failed to see the big picture. This snippet of code cannot check which browser you're using. And when you look at the entire function that this code is a part of, you'll find that it does not include browser agent checks.
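For readers curious about the snippet itself: setTimeout schedules a callback after a delay given in milliseconds, and 5E3 is simply scientific notation for 5,000. Below is a minimal sketch of the same pattern; only the setTimeout call mirrors the quoted code, while the surrounding names (delayedStart, startPlayback) are placeholders invented for illustration.

```typescript
// Sketch of the quoted pattern. Only the setTimeout call mirrors the
// snippet from YouTube's code; all surrounding names are placeholders.
function delayedStart(startPlayback: () => void): Promise<number> {
  return new Promise((resolve) => {
    // 5E3 is 5000 ms: the callback, and anything awaiting the promise,
    // is held back for five seconds.
    setTimeout(() => {
      startPlayback();
      resolve(1);
    }, 5E3);
  });
}

// Usage: nothing chained on the promise proceeds until the timer fires.
delayedStart(() => console.log("player starts")).then(() => console.log("ready"));
```

Note that nothing in this pattern inspects the browser, which is the article's point: the delay is a timer, not a user-agent check.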
"To support a diverse ecosystem of creators globally and allow billions to access their favourite content on YouTube, we've launched an effort to urge viewers with ad blockers enabled to allow ads on YouTube or try YouTube Premium for an ad-free experience. Users with ad blockers installed may experience suboptimal viewing, regardless of their browser."
Before Google commented on this story, some people speculated that the five-second delay was associated with ad blocking. They were correct. When asked about the five-second delay, Google explained, "Users with ad blockers installed may experience suboptimal viewing, regardless of the browser they are using." This is corroborated by Mozilla, Firefox's developer, which told 404 Media that the five-second delay affects all browsers.
As you may know, YouTube has spent the last few months cracking down on ad blockers. It wants customers to subscribe to YouTube Premium, which costs $13.99 a month (and includes a YouTube Music membership). The five-second delay is a crude way of ensuring that ads play. Switching a browser's user agent "resolves" the problem because it refreshes the webpage. YouTube doesn't need to serve an advertisement after you refresh, so it doesn't enforce the five-second delay.
This is a crude trick from Google. Ad-blocking services can get around the five-second delay with a simple filter. But Google is hunting for new ways to discourage ad blocker usage, and a YouTube Premium subscription may be worth the money if you can't tolerate these annoyances. As for the whole Firefox thing, Google knowingly reduced YouTube's performance on non-Chrome browsers in 2018, so we can't blame anyone for jumping to conclusions.
Source: Google via 404 Media
Fun Facts:
AI experts are increasingly afraid of what they're creating
See the Vox article by Kelsey Piper | Updated Nov 28, 2022, at 6:53 am EST.
AI Might Treat Us Like We Treat Animals
AI gets more intelligent, more capable, and more world-transforming every day. Here's why that might not be a good thing.
In 2018, at the World Economic Forum in Davos, Google CEO Sundar Pichai had something to say: "AI is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire." Pichai's comment was met with a healthy dose of scepticism. But nearly five years later, it's looking more and more prescient.
AI translation is now so advanced that it's on the brink of obviating language barriers among the most widely spoken languages on the internet. College professors are tearing their hair out because AI text generators can now write essays as well as your typical undergraduate — making it easy to cheat in a way no plagiarism detector can catch. AI-generated artwork is even winning state fairs. A new tool called Copilot uses machine learning to predict and complete lines of computer code, bringing the possibility of an AI system that could write itself one step closer. DeepMind's AlphaFold system, which uses AI to predict the 3D structure of almost every protein, was so impressive that the journal Science named it 2021's Breakthrough of the Year.
You can even see it in the first paragraph of this story, which was primarily generated for me by the OpenAI language model GPT-3.
While innovation in other technological fields can feel sluggish — as anyone waiting for the metaverse would know — AI is full steam ahead. The rapid pace of progress is feeding on itself, with more companies pouring more resources into AI development and computing power.
Of course, handing over vast sectors of our society to black-box algorithms that we barely understand creates many problems, and those problems have already begun to spark a regulatory response around the current challenges of AI discrimination and bias. But given the speed of development in the field, it's long past time to move beyond a reactive mode, one where we only address AI's downsides once they're clear and present. We need to consider not just today's systems but where the entire enterprise is headed.
The systems we're designing are increasingly powerful and increasingly general, with many tech companies explicitly naming their target as artificial general intelligence (AGI) — systems that can do everything a human can do. But creating something smarter than us, which may have the ability to deceive and mislead us — and then just hoping it doesn't want to hurt us — is a terrible plan. We need to design systems whose internals we understand and whose goals we can shape to be safe ones. However, we don't understand the systems we're building well enough to know if we've designed them safely before it's too late.
There are people working on techniques to understand powerful AI systems and ensure they will be safe to work with. Still, right now, the state of the safety field is far behind the soaring investment in making AI systems more powerful, more capable, and more dangerous. As the veteran video game programmer John Carmack said in announcing his new investor-backed AI startup, it's "AGI or bust, by way of Mad Science!"
This particular mad science might kill us all. Here's why.
Computers that can think
The human brain is the most complex and capable thinking machine evolution has ever devised. It's why human beings, a species that isn't very strong, fast, or tough, sit atop the planetary food chain, growing in number every year while so many wild animals careen toward extinction.
It makes sense that, starting in the 1940s, researchers in what would become the artificial intelligence field began toying with an irresistible idea: What if we designed computer systems through an approach similar to how the human brain works? Our minds are made up of neurons, which send signals to other neurons through connective synapses. The strength of the connections between neurons can grow or weaken over time. Connections that are used frequently tend to become stronger, and ones that are neglected tend to wane. Together, all those neurons and connections encode our memories, instincts, judgments and skills — our very sense of self.
So why not build a computer that way? In 1958, Frank Rosenblatt pulled off a proof of concept: a simple model, based on a simplified brain, that he trained to recognize patterns. "It would be possible to build brains that could reproduce themselves on an assembly line and which would be conscious of their existence," he argued. Rosenblatt wasn't wrong, but he was too far ahead of his time. Computers weren't powerful enough, and data wasn't abundant enough, to make the approach viable. A sketch of his perceptron idea follows below.
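To give a flavour of what such a model does, here is a minimal textbook perceptron (a generic illustration, not Rosenblatt's original Mark I implementation; the function and variable names are invented): inputs are multiplied by connection weights, summed, and thresholded, and each weight is strengthened or weakened whenever the prediction is wrong.

```typescript
// A minimal textbook perceptron: weighted sum, threshold, and a simple
// error-driven weight update. Generic illustration, not Rosenblatt's
// original hardware implementation.
function trainPerceptron(
  data: { x: number[]; label: number }[], // labels are 0 or 1
  epochs = 20,
  lr = 0.1
): number[] {
  const dim = data[0].x.length;
  const weights: number[] = new Array(dim + 1).fill(0); // last entry is the bias

  const predict = (x: number[]): number => {
    const sum = x.reduce((acc, xi, i) => acc + xi * weights[i], weights[dim]);
    return sum >= 0 ? 1 : 0; // threshold: the "neuron" fires or it doesn't
  };

  for (let e = 0; e < epochs; e++) {
    for (const { x, label } of data) {
      const error = label - predict(x); // -1, 0, or +1
      // Strengthen or weaken each connection in proportion to its input,
      // echoing the "frequently used connections grow stronger" idea.
      for (let i = 0; i < dim; i++) weights[i] += lr * error * x[i];
      weights[dim] += lr * error; // bias update
    }
  }
  return weights;
}

// Usage: learn the logical AND of two inputs.
const andData = [
  { x: [0, 0], label: 0 },
  { x: [0, 1], label: 0 },
  { x: [1, 0], label: 0 },
  { x: [1, 1], label: 1 },
];
console.log(trainPerceptron(andData)); // weights that separate AND
```

Training on the four input/label pairs for AND converges within a few passes; modern deep learning stacks many such units into layers and tunes them with more sophisticated updates, but it is an echo of the same strengthen-or-weaken idea.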
It wasn't until the 2010s that it became clear that this approach could work on real problems and not toy ones. By then, computers were as much as 1 trillion times more powerful than in Rosenblatt's day, and there was far more data on which to train machine learning algorithms.
This technique, now called deep learning, started significantly outperforming other approaches to computer vision, language, translation, prediction, generation, and countless other problems. The shift was about as subtle as the asteroid that wiped out the dinosaurs, as neural network-based AI systems smashed every competing technique on everything from computer vision to translation to chess.
"If you want to get the best results on many hard problems, you must use deep learning," Ilya Sutskever — cofounder of OpenAI, which produced the text-generating model GPT-3 and the image-generator DALLE-2, among others — told me in 2019. The reason is that systems designed this way generalize, meaning they can do things outside what they were trained to do. They're also highly competent, beating other approaches in terms of performance based on the benchmarks machine learning (ML) researchers use to evaluate new systems. And, he added, "they're scalable."
What "scalable" means here is as simple as it is significant: Throw more money and more data into your neural network — make it bigger, spend longer on training it, harness more data — and it does better and better. No one has yet discovered the limits of this principle, even though significant tech companies now regularly do eye-popping multimillion-dollar training runs for their systems. The more you put in, the more you get out. That drives the breathless energy that permeates so much of AI right now. It's not simply what they can do but where they're going.
If there's something the text-generating model GPT-2 couldn't do, GPT-3 generally can. If GPT-3 can't, InstructGPT (a recent release, trained to give more helpful-to-humans answers than GPT-3 did) can. There have been some clever discoveries and new approaches, but for the most part, what we've done to make these systems smarter is to make them bigger.
One thing we need to be doing is understanding these systems better. With old approaches to AI, researchers carefully sculpted rules and processes to evaluate the data they were getting, just as we do with standard computer programs. With deep learning, improving systems doesn't necessarily involve or require understanding what they're doing. A minor tweak often improves performance substantially, but the engineers designing the systems often can't say why.
If anything, as the systems get bigger, interpretability — understanding what's going on inside AI models and making sure they're pursuing our goals rather than their own — gets harder. And as we develop more powerful systems, that fact will go from an academic puzzle to a vast, existential question.
Intelligent, alien, and not necessarily friendly
...
What's the worst that could happen?
...
Asleep at the wheel
...
Meeting Location & Disclaimer
Bob Backstrom
~ Newsletter Editor ~
Information for Members and Visitors:
Link to — Sydney PC & Technology User Group
All Meetings, unless explicitly stated above, are held on the
1st Floor, Sydney Mechanics' School of Arts, 280 Pitt Street, Sydney.
Sydney PC & Technology User Group's FREE Newsletter — Subscribe — Unsubscribe
Go to Sydney PC & Technology User Group's — Events Calendar
Are you changing your email address? Please email your new address to newsletter.sydneypc@gmail.com.
Disclaimer: We provide this Newsletter "As Is" without warranty of any kind.
The reader assumes the entire risk of accuracy and subsequent use of its contents.