WEEKLY NEWSLETTER 04 - 09 MARCH, 2024
Hello and Welcome,
Meeting TODAY
2024/03/02 — 13:00-14:00 — March, Sat — Penrith Group
Meeting This Week
2024/03/05 — 18:00-20:00 — March, Tue — Main Meeting
Steve may be unable to attend the Main Meeting this week.
Here are the details for the Zoom meeting:
SPCTUG Meeting Host is inviting you to a scheduled Zoom meeting.
Topic: SPCTUG Main Meeting Zoom Meeting
Time: Mar 5, 2024 18:00 Canberra, Melbourne, Sydney
Join Zoom Meeting
https://us02web.zoom.us/j/84608773479
Meeting ID: 846 0877 3479
Passcode: SydPCMain
These details will be the same for all main meetings, including December, which will be live at the SMSA for a Christmas Party and on Zoom.
— Ed.
Meetings Next Week
2024/03/12 — 18:00-20:00 — March, Tue — Programming
2024/03/16 — 14:00-16:00 — March, Sat — Web Design
Schedule of Current & Upcoming Meetings
First Tuesday 18:00-20:00 — Main Meeting
First Saturday 13:00-14:00 — Penrith Group
Second Tuesday 18:00-20:00 — Programming
Third Tuesday 10:00-12:00 — Tuesday Group
Third Saturday 14:00-16:00 — Web Design
----------
Go to the official Sydney PC Calendar for this month's meeting details.
----------
Penrith meetings are held every 2nd month on the 1st Saturday from 1-2 pm.
The next scheduled meetings are in March, May and July 2024.
ASCCA News:
Tech News:
AirJet's solid-state cooling could radically improve tomorrow's laptops
See the PCWorld article by Michael Crider | Staff Writer | Jan 18, 2023, 10:29 am PST.
The AirJet has the potential to revolutionize how we build thin-and-light laptops or even computers in general.
AirJet
For a long time, PCs have been chasing the idea of "no moving parts" as a platonic ideal for efficiency and reliability. For just as long, active cooling has impeded this goal: for high-powered electronics, you can't beat a fan and moving air to cool stuff down. Or can you? Frore Systems' AirJet is a radical solid-state approach to active cooling, and Gordon has the scoop at CES 2023.
Frore's founder and CEO Seshu Madhavapeddy was kind enough to give PCWorld the low-down on this emerging tech, which has the potential to upend the way high-powered laptops are built. The "magic" of AirJet is a combination of exotic materials, geometry, and physics: the 2.8mm chip has cavities in the top full of vibrating membranes, which blast cool air across the heat spreader underneath, cooling down a CPU or other component. Despite the minuscule dimensions, the AirJet can send individual air particles whooshing over the heat spreader at up to 200 kilometres per hour.
It seems impossible, but the results speak for themselves. AirJet's CES demonstration showed its "Mini" chip pushing air across a conventional fan and pushing up a ping-pong ball in a way that can't be denied, replicating the cooling power of old-fashioned spinning blades. The back pressure generated by the AirJet — the force created by the moving air — equals that of a fan more than ten times its size. The AirJet is also silent and potentially dust-proof.
The potential here is enormous. One of the most significant limiting factors in the performance of thin laptops is the thermal profile: you can't shove a desktop-class chip into a computer and run it at full performance without making it thick as a brick (or setting it on fire). But those limitations start to disappear with solid-state cooling at a fraction of the size of even the most advanced conventional active cooling systems.
According to Madhavapeddy, two 1-watt AirJet units can account for 5 watts of cooling; with one larger "Pro" unit, that could double the thermal limit of a fanless thin-and-light laptop from 10 watts to 20 watts. The system has been scaled up to 28 watts in a silent, fanless laptop. For integration with more conventional designs, the AirJet can also be installed with a vapour chamber to put it to the side of a processor instead of directly on top.
Frore expects the first devices with its AirJet cooling systems to debut by the end of the year (2023). Check out the video for the full technical breakdown. And for more looks at the future of PC tech, subscribe to PCWorld on YouTube!
Microsoft is giving Windows Copilot an upgrade with Power Automate, promising to banish tedious tasks thanks to AI
See the TechRadar article by Kristina Terech | published on 23 February 2024.
The new Copilot plug-in can automate Excel, PDFs, and files.
Copilot (Image credit: Microsoft)
Microsoft has revealed a new plug-in for Copilot, its artificial intelligence (AI) assistant, named Power Automate. It will enable users to (as the name suggests) automate repetitive and tedious tasks, such as creating and manipulating entries in Excel, handling PDFs, and file management.
This development is part of a more extensive Copilot update package that will add several new capabilities to the digital AI assistant.
Microsoft gives the following examples of tasks this new Copilot plug-in could automate (a rough script sketch of the renaming example follows the list):
— Write an email to my team wishing everyone a happy weekend.
— Create an Excel file listing the top 5 highest mountains in the world.
— Rename all PDF files in a folder to add the word final at the end.
— Move all Word documents to another folder.
— I need to split a PDF by the first page. Can you help?
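To make that concrete, here is a minimal Python sketch of the renaming example done by hand. It is not Power Automate itself, just an illustration of the kind of chore the plug-in is meant to take off your hands, and the folder path is made up.

from pathlib import Path

# Hypothetical folder; with the plug-in, Copilot would do this from a plain-English prompt.
folder = Path.home() / "Documents" / "Reports"

for pdf in folder.glob("*.pdf"):
    # e.g. "report.pdf" becomes "report final.pdf"
    pdf.rename(pdf.with_name(f"{pdf.stem} final{pdf.suffix}"))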
Who can get the Power Automate plug-in, and how?
Currently, this plug-in is only available to some users with access to Windows 11 Preview Build 26058, available to Windows Insiders in the Canary and Dev Channels of the Windows Insider Program. The Windows Insider Program is a Microsoft-run community for Windows enthusiasts and professionals where users can get early access to upcoming versions of Windows, new features, and more, and can provide feedback to Microsoft developers before a wider rollout.
Hopefully, the Power Automate plug-in for Copilot will prove a hit with testers, and if it does, we should see it rolled out to all Windows 11 users soon.
Per the blog post announcing the Copilot update, this is the first release of the plug-in, which is part of Microsoft's Power Platform, a comprehensive suite of tools designed to help users make their workflows more efficient and versatile — including Power Automate. To use this plug-in, you'll need to download Power Automate for Desktop from the Microsoft Store (or ensure you have the latest version of Power Automate).
There are multiple options for using Power Automate: the free plan is suitable for personal use or smaller projects, and premium plans offer packages with more advanced features. From what we can tell, the ability to enable the Power Automate plug-in for Copilot will be free for all users, but Microsoft might change this.
Once you've made sure you have the latest version of Power Automate downloaded, you'll also need to be signed into Copilot for Windows with a Microsoft Account. Then, you'll need to add the plug-in to Copilot. To do this, go to the Plug-in section in the Copilot app for Windows and turn on the Power Automate plug-in, which should now be visible. Once enabled, you should be able to ask it to perform a task like one of the above examples and see how Copilot copes.
Once you try the plug-in for yourself, if you have any thoughts about it, you can share them with Microsoft directly at powerautomate-ai@microsoft.com.
Hopefully, this is a sign of more to come.
The language Microsoft uses about the plug-in implies that it will see improvements in the future, enabling it, and therefore Copilot, to carry out more tasks. Upgrades like this are steps in the right direction if they're as effective as they sound.
This could address one of people's biggest complaints about Copilot since it was launched. Microsoft presented it as a Swiss Army Knife-like digital assistant with all kinds of AI capabilities, and, at least for now, it's not anywhere near that. While we admire Microsoft's AI ambitions, the company did make big promises, and many users are growing impatient.
We'll have to continue to watch whether Copilot will live up to Microsoft's messaging or if it'll go the way of Microsoft's other digital assistants like Cortana and Clippy.
Fun Facts:
What Is OpenAI Sora, and Will It Change Video Forever?
See the How-To Geek article by Sydney Butler | published 23 February 2024.
Will AI video kill the movie star?
KEY TAKEAWAYS
OpenAI Sora creates highly realistic video clips from text prompts, showcasing a significant advancement in AI technology.
Sora's ability to simulate physics in videos accurately is a standout feature, but it still has some issues with interactions and object generation.
The availability of Sora to the public is still being determined, as it is currently being tested for safety and quality before a firm release date is set.
Waves, by Sora
AI development is speeding towards a point beyond human comprehension. OpenAI's Sora text-to-video system is just the latest AI tech to shock the world into realizing things are happening sooner than expected.
What Is OpenAI Sora?
Like other generative AI tools such as DALL-E and MidJourney, Sora takes text prompts from you and converts them into a visual medium. However, unlike those AI image generators, Sora creates a video clip with motion, different camera angles, direction, and everything else you'd expect from a traditionally produced video.
Looking at the examples on the Sora website, the results are often indistinguishable from real, professionally produced videos: everything from high-end drone footage to multi-million dollar movie productions, complete with AI-generated actors, special effects, and the works.
Sora is, of course, one of many technologies that can do this. Until now, the most visible leader in this area was RunwayML, which offered its services to the public for a fee. However, even under the best circumstances, Runway's videos are more akin to the early generations of MidJourney still images. There's no stability in the image, the physics doesn't make sense, and as I write this, the longest clip length is 16 seconds.
In contrast, the best output that Sora has to show is perfectly stable, with physics that looks right (to our brains, at least), and clips can be up to a minute in length. The clips are entirely devoid of sound, but other AI systems can already generate music, sound effects, and speech. So, I have no doubt those tools could be integrated into a Sora workflow, or, at worst, that traditional voiceover and Foley work could fill the gap.
It can't be overstated what an enormous leap Sora represents from nightmarish AI video footage from just a year before the Sora demo, such as the quite-disturbing AI Will Smith eating spaghetti. This is an even bigger shock to the system than when AI image generators went from a running joke to giving visual artists existential dread.
Sora will likely impact the entire video industry, from one-person stock footage makers up to Disney and Marvel mega-budget projects. Nothing will be untouched by this. This is especially true since Sora doesn't have to create everything from whole cloth but can work on existing material, such as animating a still image you've provided. This might be the actual start of the synthetic movie industry.
How Does Sora Work?
We're going to get under the hood of Sora as far as we can, but it's not possible to go into that much detail. First, because OpenAI is ironically not open about the inner workings of their technology. It's all proprietary, so the secret sauce that sets Sora apart from the competition is unknown to us in its precise details. Second, I'm not a computer scientist, and you're probably not a computer scientist, so we can only understand broadly how this technology works.
The good news is that there's an excellent (paywalled) Sora explainer by Mike Young on Medium, based on a technical report from OpenAI that he's broken down for us mere mortals to comprehend. While both documents are worth reading, we can extract the most important facts here.
Sora is built on the lessons companies like OpenAI have learned when creating technologies like ChatGPT or DALL-E. Sora innovates in how it's trained on sample videos, breaking them into "patches", which are analogous to the "tokens" used by ChatGPT's training model. Because these patches are a uniform representation, things like clip length, aspect ratio, and resolution don't matter to Sora.
Sora uses the same broad transformer approach that powers GPT and the diffusion method that AI image generators use. During training, it looks at noisy, partly diffused patch tokens from a video and tries to predict what the clean, noise-free token would look like. By comparing that to the ground truth, the model learns the "language" of video, which is why the examples from the Sora website look so authentic.
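For readers who like to see the idea in code, here is a heavily simplified sketch of that training step, assuming PyTorch. Sora's actual model, patch embedding, and noise schedule are proprietary and undisclosed, so every name below is illustrative only.

import torch
import torch.nn.functional as F

def training_step(model, clean_patches):
    # clean_patches: (batch, num_patches, dim) embeddings of video "patches"
    noise = torch.randn_like(clean_patches)
    t = torch.rand(clean_patches.shape[0], 1, 1)        # random noise level per clip
    noisy = (1 - t) * clean_patches + t * noise         # partly "diffused" patch tokens
    predicted_clean = model(noisy, t)                   # the transformer tries to denoise them
    return F.mse_loss(predicted_clean, clean_patches)   # compare against the ground truth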
Apart from this remarkable ability, Sora also has highly detailed captions for the video frames it's trained on, which is a large part of why it can modify the videos it generates based on text prompts.
Sora's ability to accurately simulate physics in videos is an emergent feature, which results simply from being trained on millions of videos that contain motion based on real-world physics. Sora has excellent object permanence; even when objects leave the frame or are occluded by something else in the frame, they remain present and return unmolested.
However, there are sometimes issues with how things in the video interact, with causality, and with spontaneous object generation. Also, Sora seems to confuse left with right occasionally. Nonetheless, what's been shown so far is usable and state-of-the-art.
When Will You Get Sora?
So we're all extremely excited to get hands-on with Sora, and you can bet your bottom dollar I'll be playing with it and writing up exactly how good this technology is once we're not just being shown hand-picked outputs. But how soon can this happen?
As of this writing, it's unclear exactly how long it will be before Sora is available to the general public or how much it will cost. OpenAI has stated that the technology is in the hands of the "red team", which is the group of people whose job is to try and make Sora do all the naughty things it's not supposed to and then help put guardrails up against that sort of thing happening when actual customers get to use it. This includes the potential to create misinformation, derogatory or offensive materials, and many more abuses one might imagine.
As of this writing, it's also in the hands of selected creators, which I suspect is both for testing purposes and to get some third-party reviews and endorsements out as we lead up to its final release.
The bottom line is that we don't yet know when Sora will be available in the same way you can pay for and use DALL-E 3; in reality, even OpenAI has probably not set a firm date. This is because, if it's in the hands of safety testers, they might uncover issues that take longer to fix than expected, which would push back a public release.
The fact that OpenAI feels ready to show off Sora and even take a few curated public prompts through X (formerly Twitter) means that the company thinks the quality of the final product is pretty much there. Still, until there's a better picture of public opinion and of any safety issues discovered, no one can say for sure. We're talking months rather than years, but don't expect it next week.
Deep Dive into the Greenshot Application
See the 6m20s YouTube video by "itskenagain-tech".
Greenshot
This free program does everything that Windows Snipping Tool does and more.
For screenshots in Windows, I would typically use "Screenpresso". Over the years it has been simple and reliable.
Unfortunately, the last time I tried it on an image, it wanted a missing DLL to continue. It offered to download the run-time DLL (from Microsoft), so I agreed to install it.
At first, the DLL downloaded without any trouble. Then it wouldn't install: it was wrapped in a .zip file, and I didn't know what to execute to install the DLL. Screenpresso kept loudly DINGING at me to download and install the DLL, all the while trapping me with no way of cancelling or exiting from the situation.
I finally had to END-TASK the program with TASK MANAGER. The next time I tried to use Screenpresso it repeated the performance.
I'd had enough. Good old Revo-Uninstaller to the rescue!
Then I heard from John Lucke about the fantastic free Greenshot program that worked wonders with editing and enhancing Windows screenshots.
Watch the video, and you, too, will be amazed.
Many thanks, again, to John Lucke.
PS: See The 15 Best Screenshot Tools for Windows by Shreelekha Singh | January 19, 2023.
[ Greenshot is their Number 5 out of 15 — Ed. ]
Greenshot is an open-source screenshot software available at no cost for Windows users. On the other hand, Mac users need to pay for the software.
The platform offers several key features like:
Capture the entire screen or a specific section (see the sketch after this list).
Specialized capture for scrolling web pages from Internet Explorer.
Advanced annotation tools, including highlights, text, and obfuscation.
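As a rough illustration of what "capture a specific section" means under the hood, here is a tiny Python sketch using the Pillow library rather than Greenshot's own code; the coordinates and file name are made up.

from PIL import ImageGrab

# Grab an 800x600 region from the top-left of the screen
# (bbox is left, top, right, bottom, in pixels) and save it as a PNG.
region = ImageGrab.grab(bbox=(0, 0, 800, 600))
region.save("capture.png")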
Even though Greenshot is a decent screenshot software, it's important to point out that the software hasn't had a stable release since August 2017. This prolonged period without updates could mean possible compatibility or security concerns for newer Windows versions.
Software from 2017
Meeting Location & Disclaimer
Bob Backstrom
~ Newsletter Editor ~
Information for Members and Visitors:
Link to — Sydney PC & Technology User Group
All Meetings, unless explicitly stated above, are held on the
1st Floor, Sydney Mechanics' School of Arts, 280 Pitt Street, Sydney.
Sydney PC & Technology User Group's FREE Newsletter — Subscribe — Unsubscribe
Go to Sydney PC & Technology User Group's — Events Calendar
Are you changing your email address? Would you please email your new address to — newsletter.sydneypc@gmail.com?
Disclaimer: We provide this Newsletter "As Is" without warranty of any kind.
The reader assumes the entire risk of accuracy and subsequent use of its contents.