
Search results for 'General' - Page: 1
PC World - 25 Mar (PC World) Scams are increasing in frequency and scope. You’re probably tired of reading about them. I’m disappointed that I need to keep writing about them. Unfortunately, none of us will catch a break any time soon. Things are likely to get worse.
Typically, scams cast a wide net. A recent common topic has been unpaid parking tickets or tolls. However, these kinds of scams aren’t the type I’m most worried about. (I live in an urban area with good public transportation. What’s a car?) What’s far more concerning are personalized scams—ones that zero in on details of my life to trick me. And this evolved kind of scheme is set to take off as AI tools continue to improve.
It’s the number one warning that comes up when I’ve talked to cybersecurity experts in recent months. They’ve repeatedly told me that AI reduces the effort of targeting specific individuals, letting scammers refine and automate their campaigns.
“[In 2025], there’s no doubt we’ll see increasingly more AI-driven attacks,” said Paige Schaffer, CEO of Iris, an identity protection service. “We already see plenty of AI-created phishing emails that look incredibly realistic, impersonating trusted individuals or companies.”
But AI also creates opportunities for fraudsters to target specific individuals instead of relying on more general tactics. “By analyzing large datasets, AI systems can help criminals identify psychological vulnerabilities (or certain individuals) more susceptible to these types of attacks and exploit their unique biases or predispositions,” said Schaffer.
Foundry
What kind of vulnerabilities? Consider this scenario presented by Abhishek Karnik, Head of Threat Research at McAfee, who says more personalized scams include those that play off of strong emotions like fear or desperation: “Imagine being one of the 36 percent of people who say they’ve gotten fake job offers, often for remote or urgent roles that seem too good to be true. If you’re not desperate for a job, you may pause and think twice before replying—but if you need to find a job to make ends meet, you may click and end up giving up personal information.”
More carefully crafted scams don’t need to be exotic or wildly detailed, either. They still involve tried and true ploys—think banking or credit card fraud—but just become far more convincing. “Thirty-seven percent of Americans have received fake alerts about supposed issues with their bank accounts or credit cards,” says Karnik. “To make things even trickier, two out of three people admit they’re not confident they could tell the difference between a voicemail created with AI and one from a real person.”
Yes, deepfake audio calls are a real thing now—and they can be spun into hyperpersonal scams. More than just impersonating the right bank when asking you to “verify” your account, such ruses go straight for the emotional jugular. They synthesize the voice of someone you love, then create audio begging for help with a desperate situation.
Further reading: How AI impersonators will wreck online security in 2025
Deepfake videos (as pictured above) aren’t the only thing that can be used for impersonation—deepfake audio is a thing too, and easier to produce. McAfee
Fortunately, despite this increased sophistication, you can still protect yourself. The first step is knowing that scammers want your money—and that they’ll go after it through phishing attacks that steal your login information, infostealers that record everything you do (including signing into financial accounts), and ransomware that lets them extort you, as well as duping you into paying fake bills and donating to nonexistent charities.
A second big protective measure is to always stop and consider the validity of the alerts you receive. If you receive a notification about a data breach and are advised to reset your password, a quick Google search can reveal if that was in the news. Your package is delayed? Your online account should show you the package status. Bank called saying your account has been frozen? Log in (or call the number on your statements) and verify.
This can sound like a lot of work, and it can be. I find it simpler to take a blanket approach with my wariness: I immediately go directly to the source. Data breach? I open a new tab, log in, and change my password. Package is late? I skim through my email to see if I’m actually expecting a shipment. Bank account got frozen? I call the number on my statements, fired up with the power of a thousand suns.
Is this slightly more work than using the provided links in an email? A little. But it’s a lot less thinking for me on busy days.
As a backup, you can also look to security software. Independent antivirus software is getting its own AI shot in the arm. Companies like Norton, McAfee, Bitdefender, and others are using AI for better scam detection, including deepfakes. This level of defense isn’t bulletproof yet—not in the thorough way that antivirus software stops malware—but it’s ramping up steadily.
Fraudsters target the ordinary details of our lives, hoping no one pays close enough attention. And now that I know it’s easier for them to really nail their targets, I also know to be more vigilant. Yeah, you and I are small fries. But what we’ve got in the bank is big to us—and worth guarding.
PC World - 22 Mar (PC World) The wait is almost over. You can get Lenovo’s curvy new Legion Go S gaming handheld right now if you’re okay with it running Windows. But if you’re like me, you’re eagerly waiting for the SteamOS variant, which will be the very first third-party gaming handheld to run Valve’s SteamOS. That one is now up for preorder at Best Buy.
The retailer is offering two models of the Legion Go S Powered by SteamOS, its full and unwieldy title. The $550 base model comes with 16GB of RAM, a 512GB storage drive, and an AMD Ryzen Z2 Go processor. For $750, you can bump that up to 32GB of RAM, a full 1TB of storage, and the older but more powerful Ryzen Z1 Extreme chip. According to the preview page, they’re scheduled for release on May 25th, at least in the United States. (I’ll point out that $550 is 10 percent more than the price we heard back at the announcement… but a lot has happened in the last couple of months.)
The Legion Go S ditches the blocky body and Switch-style removable controllers of the original Legion Go handheld, but keeps a lot of the other features intact. While the more ergonomic body is a definite improvement, it’s not enough to get over the general jankiness of running Windows on an 8-inch device, and that lower-power chip certainly doesn’t help with the software overhead. Being a poor deal compared to the Steam Deck sure doesn’t help. For more info, check out our review.
Before Lenovo confirmed it would ship the first non-Valve PC handheld to run SteamOS back at CES 2025, we’d heard that Valve was also looking at the Asus ROG Ally family. The most recent preview version of SteamOS mentions that it’s laying the groundwork for that hardware expansion. But if you’re tired of waiting, you can always try rolling your own with some Linux-based alternatives.
BBCWorld - 21 Mar (BBCWorld) The US attorney general said a wave of vandalism and arson attacks at Tesla dealerships is “domestic terrorism.”
PC World - 21 Mar (PC World) If you try out Intel’s AI Playground, which incorporates everything from AI art to an LLM chatbot to even text-to-video in a single app, you might think: Wow! OK! An all-in-one local AI app that does everything is worth trying out! And it is… except that it’s made for just a small slice of Intel’s own products.
Quite simply, no single AI app has emerged as the “Amazon” of AI, doing everything you’d want in a single service or site. You can use a tool like Adobe Photoshop or Firefly to perform sophisticated image generation and editing, but chatting is out. ChatGPT or Google Gemini can converse with you, even generating images, but only to a limited extent.
Most of these services require you to hopscotch back and forth between sites, however, and can cost money for a subscription. Intel’s AI Playground merges all of these inside a single, well-organized app that runs locally (and entirely privately) on your PC, and it’s all free.
Should I let you in on the catch? I suppose I have to. AI Playground is a showcase for Intel’s Core Ultra processors, including its CPUs and GPUs: specifically, the Core Ultra 100 (Meteor Lake) and Core Ultra 200V (Lunar Lake) chips. But it could be so, so much better if everyone could use it.
Mark Hachman / Foundry
Yes, I realize that some users are quite suspicious of AI. (There are even AI-generated news stories!) Others, however, have found that certain tasks in their daily life such as business email can be handed off to ChatGPT. AI is a tool, even if it can be used in ways we disagree with.
What’s in AI Playground?
AI Playground has three main areas, all designated by tabs on the top of the screen:
Create: An AI image generator, which operates in either a default text-to-image mode, or in a “workflow” mode that uses a more sophisticated back end for higher-quality images
Enhance: Here, you can edit your images, either upscaling them or altering them through generative AI
Answer: A conventional AI chatbot, either as a standalone or with the ability to upload your own text documents
Each of those sections is what you might call self-sufficient, usable by itself. But in the upper right-hand corner is a settings or “gear” icon, which contains a terrific number of additional options, which are absolutely worth examining.
How to set up and install AI Playground
AI Playground’s strength is in its thoughtfulness, ease of use, and simplicity. If you’ve ever used a local AI application, you know that it can be rough. Some tools offer just a command-line interface, which may require a working knowledge of Python or GitHub. AI Playground was designed around the premise that it will take care of everything with just a single click. Documentation and explanations might be a little lacking in places, but AI Playground’s ease of use is unparalleled.
AI Playground can be downloaded from Intel’s AI Playground page. At press time, AI Playground was on version 2.2.1 beta.
AI Playground’s setup is pretty easy. Just download what you want. If you choose not to, and need access later, the app will just prompt you to download it at a future time. Mark Hachman / Foundry
Note that the app and its back-end code require either a Core Ultra H (“Meteor Lake”) chip, a Core Ultra 200V (“Lunar Lake”) chip, or one of the Intel Arc discrete GPUs, including the Alchemist and Battlemage parts. If you own a massive gaming laptop with a 14th-gen Intel Core chip or an Nvidia RTX 5090 GPU, you’re out of luck. Same with the Core Ultra 200H, or “Arrow Lake.”
Since this is an “AI Playground,” you might think the chip’s NPU would be used. Nope. All of these applications tap just the chip’s integrated GPU; I didn’t see the NPU being accessed once via Windows Task Manager.
Also, keep in mind that these AI models depend on the GPU’s UMA frame buffer, the memory pool that’s shared between system memory and the integrated GPU. Under this unified memory architecture (UMA), Intel’s integrated graphics can claim up to half of the available system memory, whereas discrete GPUs have their own dedicated VRAM to pull from. The bottom line? You may not have enough video memory available to run every model.
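As a rough back-of-envelope check (a sketch of the shared-memory behavior described above, not an official Intel formula), you can estimate whether a given model will fit:

```python
# Rough estimate of the iGPU's usable memory pool under UMA (illustrative only).
system_ram_gb = 32                 # total system memory in the test machine
uma_budget_gb = system_ram_gb / 2  # the iGPU can claim roughly half of it

for model_size_gb in (5, 12, 24):
    verdict = "fits" if model_size_gb <= uma_budget_gb else "will not fit"
    print(f"A {model_size_gb}GB model {verdict} in a ~{uma_budget_gb:.0f}GB UMA budget")
```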
Downloading the initial AI Playground application took about 680 megabytes on my machine. But that’s only the shell application. The models require an additional download, which will either be handled by the installer itself or may require you to click the “download” button when prompted.
The nice thing is that you don’t have to manage any of this. If AI Playground needs a model, it will tell you which one it needs and how much hard drive space it requires. None of the models I saw used more than 12GB of storage space, and many used much less. But if you want to try out a number of models, be prepared to download a couple dozen gigabytes or more.
Playing with AI Playground
I’ve called Fooocus the easiest way to generate AI art on your PC. For its time, it was! And it works with just about any GPU, too. But AI Playground may be even easier. The Create tab opens with just a space for a prompt and nothing else.
Like most AI art tools, the prompt defines the image, and you can get really detailed. Here’s an example: “Award winning photo of a high speed purple sports car, hyper-realism, racing fast over wet track at night. The license plate number is ‘B580’, motion blur, expansive glowing cityscape, neon lights…”
The Settings gear in the upper right-hand corner opens up this options menu, with numerous tweaks. My advice is to experiment. Mark Hachman / Foundry
Enter a prompt and AI Playground will draw four small images, which appear in a vertical column to the left. Each image progresses in a series of steps with 20 as the default. After the image is completed, some small icons will appear next to it with additional options, including importing it into the “Enhance” tab.
The Settings gear is where you can begin tweaking your output. You can select either “Standard” or “HD” resolution, which adjusts the “Image Size” field, and you can further tweak the image size, resolution, and format. The “HD” option requires you to download a different model, as does the “Workflow” option to the upper right, which adds workflows based on ComfyUI. Essentially, these produce better-looking images, with the option to guide the output with a reference image or other workflow.
Some of the models are trained on public figures and celebrities. But the quality falls to the level of “AI slop” in places. Mark Hachman / Foundry
For now, the default model can be adjusted via the “Manual” tab, which opens up two additional options. You’ll see a “negative prompt,” which excludes whatever you enter there from the generated image, and a “Safe Check” toggle to block gore and other disturbing images. By default, “NSFW” (Not Safe for Work) is added to the negative prompt.
Both the Safe Check and NSFW negative prompt only appear as options in the Default image generator and seem to be on by default elsewhere. It’s up to you whether or not to remove them. The Default model (Lykon/dreamshaper-8) has apparently been trained on nudity and celebrities, though I stuck to public figures for testing purposes.
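For the curious, here’s a rough sketch of what this kind of text-to-image generation looks like in code, using Hugging Face’s diffusers library and the Lykon/dreamshaper-8 checkpoint mentioned above. This is only an illustration of the general technique; it is not AI Playground’s actual backend, and the device string will depend on your hardware.

```python
# Illustrative text-to-image sketch with diffusers (not AI Playground's own code).
import torch
from diffusers import AutoPipelineForText2Image

# Lykon/dreamshaper-8 is the default model AI Playground reports using.
pipe = AutoPipelineForText2Image.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float16
).to("cuda")  # swap for "cpu" or another backend depending on your hardware

prompt = ("Award winning photo of a high speed purple sports car, hyper-realism, "
          "racing fast over wet track at night, motion blur, neon lights")

image = pipe(
    prompt,
    negative_prompt="NSFW",   # mirrors the default negative prompt described above
    num_inference_steps=20,   # the default step count
    width=896, height=576,    # one of the sizes used in testing
).images[0]
image.save("sports_car.png")
```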
Note that all of your AI-generated art stays local to your PC, though Intel (obviously) warns you not to use a person’s likeness without their permission.
There’s also a jaw-droppingly obvious bug that I can’t believe Intel didn’t catch. HD image generation often begins with the word “UPLOAD” projected over the image, and sometimes the final render keeps it, too. Why? Because there’s a field for adding a reference image, with “UPLOAD” written right in the middle of it. Somehow, AI Playground picks up that text as part of the image.
Mark Hachman / Foundry
Though my test machine was a Core Ultra 258V (Lunar Lake) with 32GB of RAM, an 896×576 image took 29 seconds to generate with 25 rendering steps in the Default mode. Using the Workflow (Line2-Image-HD-Quality) model at 1280×832 resolution and 20 steps, one image took two minutes 12 seconds to render. There’s also a Fast mode, which should lower the rendering time, though I didn’t really like the output quality.
If you find an image you like, you can use the Enhance tab to upscale it. (Upscaling is also being added to the Windows Photos app, which will eventually be made available to Copilot+ PCs using Intel Core Ultra 200 chips.) You can also use “inpainting,” which allows you to re-generate a portion of the image, and “outpainting,” the technique that was used to “expand” the boundaries of the Mona Lisa painting, for example. You can also ask AI to tweak the image itself, though I had problems trying to generate a satisfactory result.
The Enhance tab of Intel’s AI Playground, where you can upscale images and make adjustments. I’ve had more luck with inpainting and outpainting than with tweaking the entire image with an image prompt. Mark Hachman / Foundry
The “Workflow” tab also hides some interesting utilities such as a “face swap” app and a way to “colorize” black-and-white photos. I was disappointed to see that a “text to video” model didn’t work, presumably because my PC was running on integrated graphics.
The “Answer” or chatbot portion of AI Playground seems to be the weakest option. The default model, Microsoft’s Phi-3-mini-4K-instruct, refused to answer the dumb comic-book-nerd question, “Who would win in a fight, Wonder Woman or Iron Man?”
It’s not shown here, but you can turn on performance metrics to track how many tokens per second the model runs. There’s also a RAG option that can be used to upload documents, but it doesn’t work on the current release. Mark Hachman / Foundry
It continued.
“What is the best car for an old man? Sorry, I can’t help with that.”
“What’s better, celery or potatoes? I’m sorry, I can’t assist with that. As an AI, I don’t have personal preferences.”
And so on. Switching to a different model that runs on Intel’s OpenVINO toolkit, though, helped. There, the OpenVINO/Phi-3.5-mini-instruct-int4 model took 1.21 seconds to generate its first response token, then produced output at about 20 tokens per second. (A token isn’t quite the length of a word, but it’s a good rule of thumb.) I was also able to do some “vibe coding” — generating code via AI without the faintest clue what you’re doing. By default, the output is just a few hundred tokens, but that can be adjusted via a slider.
You can also import your own model by dropping a GGUF file (a model file format used by many local inference engines) into the appropriate folder.
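To give a sense of what actually happens with such a file, here’s a minimal sketch using the llama-cpp-python bindings, one of the common ways to load GGUF models locally. It’s an illustration under that assumption, not the code AI Playground itself runs, and the model path is a placeholder.

```python
# Illustrative sketch: loading and querying a GGUF model with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="models/your-model.Q4_K_M.gguf",  # placeholder path to any local GGUF file
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if the build supports it
)

result = llm.create_completion(
    "Who would win in a fight, Wonder Woman or Iron Man?",
    max_tokens=256,
)
print(result["choices"][0]["text"])
```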
Adapt AI Playground to AMD and Nvidia, please!
For all that, I really like AI Playground. Some people are notably (justifiably?) skeptical of AI, especially given how AI can make mistakes and replace the authentic output of human artists. I’m not here to argue either side.
What Intel has done, however, is create a surprisingly good general-purpose and enthusiast application for exploring AI, one that receives frequent updates and seems to be consistently improving.
The best thing about AI Playground? It’s open source, meaning that someone could probably come up with a fork that supports more GPUs and CPUs. From what I can see, it just hasn’t happened yet. If it did, it could be the single unified local AI app I’ve been waiting for.
sharechat.co.nz - 20 Mar (sharechat.co.nz) Capital is the lifeblood of prosperity, but New Zealand doesn’t have enough, says the managing director of a leading private financial services provider.
PC World - 20 Mar (PC World) Nvidia, AMD, and Intel have all latched onto AI-powered techniques as a way to enhance their graphics capabilities. Now Arm has entered the arena, shipping a new Arm Accuracy Super Resolution (Arm ASR) technology that’s based on something AMD previously developed.
Arm ASR was initially developed for mobile GPUs, not for PCs. But Arm showed off Arm ASR in a demonstration for Unreal Engine 5, running its desktop renderer on a mobile platform. All told, Arm ASR sped up the rendering engine by 30 percent, suggesting that Arm’s customers—including Qualcomm with its Snapdragon PCs—could use the technology to eventually speed up PC graphics as well.
Arm ASR, which was first announced a year ago, is being released today as an Unreal Engine plugin. A Unity plugin will be available later this year. Arm said that it plans to expand Arm ASR to other platforms, without specifying exactly which ones or when.
Arm ASR is built upon AMD’s FidelityFX Super Resolution 2 (FSR 2), the older and simpler version that takes lower-resolution images and upscales them, boosting frame rates via faster processing. (AMD’s later iterations, including FSR 3 and FSR 4, also include frame generation.) Arm ASR uses temporal upscaling, however, and is said to be an improved version of that approach.
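For a rough intuition of what temporal upscaling means, here is a deliberately simplified sketch: each frame, the game renders at a lower resolution with a slight jitter, and the upscaler blends that new frame with the previous high-resolution output, reprojected along per-pixel motion vectors. Production upscalers such as FSR 2 and Arm ASR add many heuristics (disocclusion detection, color clamping, and so on) that are omitted here; this is a conceptual illustration, not Arm’s implementation.

```python
# Conceptual sketch of temporal upscaling (not Arm ASR's actual algorithm).
import numpy as np

def naive_upscale(img, out_h, out_w):
    """Nearest-neighbour upscale of an (h, w, 3) image; real upscalers use better filters."""
    ys = (np.arange(out_h) * img.shape[0] // out_h).astype(int)
    xs = (np.arange(out_w) * img.shape[1] // out_w).astype(int)
    return img[ys][:, xs]

def temporal_upscale(low_res_frame, history, motion_vectors, alpha=0.1):
    """
    low_res_frame  : current jittered render at low resolution, shape (h, w, 3)
    history        : previous high-resolution output, shape (H, W, 3)
    motion_vectors : per-pixel screen-space motion at high resolution, shape (H, W, 2)
    alpha          : weight given to the new sample versus the accumulated history
    """
    H, W, _ = history.shape
    upscaled = naive_upscale(low_res_frame, H, W)

    # Reproject: fetch each pixel's colour from where it was last frame.
    yy, xx = np.mgrid[0:H, 0:W]
    prev_y = np.clip((yy - motion_vectors[..., 1]).astype(int), 0, H - 1)
    prev_x = np.clip((xx - motion_vectors[..., 0]).astype(int), 0, W - 1)
    reprojected = history[prev_y, prev_x]

    # Blend the sharp-but-noisy new sample with the stable history.
    return alpha * upscaled + (1.0 - alpha) * reprojected
```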
Arm showed off Arm ASR in its new demonstration video.
Arm said that game developers just need to enable the ASR plugin, configure the project settings to use Temporal Anti-Aliasing, and verify the integration. “Prominent game studios, including Enduring Games, Infold Games, and Sumo Digital, have integrated Arm ASR into their development processes, leading to improved game performance at the same visual quality,” Arm said.
At this point, it’s not clear whether or not licensees like Qualcomm will have access to Arm ASR, given the unexpected IP litigation that’s been brewing between the two. Last week, Qualcomm said that it had filed two additional briefs in its fight against Arm, which was largely settled in Qualcomm’s favor after Arm unexpectedly tried to cancel Qualcomm’s IP license. Those recent briefs ask the court to rule against Arm in an unresolved claim in the IP trial. The second motion supports Qualcomm’s separate attempt to sue Arm for breach of contract.
It is true, however, that the Windows on Arm platform in general has struggled to run games, largely because of compatibility issues. As Qualcomm and the Arm ecosystem continue to try and resolve that issue, Arm ASR will probably make gaming on Arm more attractive to developers and end customers alike.
PC World - 18 Mar (PC World) When DeepSeek-R1 released back in January, it was incredibly hyped up. This reasoning model could be distilled down to work with smaller large language models (LLMs) on consumer-grade laptops. If you believed the headlines, you’d think it’s now possible to run AI models that are competitive with ChatGPT right on your toaster.
That just isn’t true, though. I tried running LLMs locally on a typical Windows laptop and the whole experience still kinda sucks. There are still a handful of problems that keep rearing their heads.
Problem #1: Small LLMs are stupid
Newer open LLMs often brag about big benchmark improvements, and that was certainly the case with DeepSeek-R1, which came close to OpenAI’s o1 in some benchmarks.
But the model you run on your Windows laptop isn’t the same one that’s scoring high marks. It’s a much smaller, more condensed model—and smaller versions of large language models aren’t very smart.
Just look at what happened when I asked DeepSeek-R1-Llama-8B how the chicken crossed the road:
Matt Smith / Foundry
This simple question—and the LLM’s rambling answer—shows how smaller models can easily go off the rails. They frequently fail to notice context or pick up on nuances that should seem obvious.
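If you’d like to reproduce this kind of test on your own machine, here’s a minimal sketch using the Ollama Python client. It assumes Ollama is installed and that the deepseek-r1:8b distilled model (the tag Ollama uses for the Llama-8B distill) has already been pulled; treat those names as assumptions rather than the exact setup used for the screenshots above.

```python
# Minimal sketch: querying a locally hosted DeepSeek-R1 distill through Ollama.
# Assumes the Ollama server is running and `ollama pull deepseek-r1:8b` has been done.
import ollama

response = ollama.chat(
    model="deepseek-r1:8b",  # the 8B Llama-based distill
    messages=[{"role": "user", "content": "How did the chicken cross the road?"}],
)
print(response["message"]["content"])
```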
In fact, recent research suggests that less intelligent large language models with reasoning capabilities are prone to such faults. I recently wrote about the issue of overthinking in AI reasoning models and how it leads to increased computational costs.
I’ll admit that the chicken example is a silly one. How about we try a more practical task? Like coding a simple website in HTML. I created a fictional resume using Anthropic’s Claude 3.7 Sonnet, then asked Qwen2.5-7B-Instruct to create an HTML website based on the resume.
The results were far from great:
Matt Smith / Foundry
To be fair, it’s better than what I could create if you sat me down at a computer without an internet connection and asked me to code a similar website. Still, I don’t think most people would want to use this resume to represent themselves online.
A larger and smarter model, like Anthropic’s Claude 3.7 Sonnet, can generate a higher quality website. I could still criticize it, but my issues would be more nuanced and less to do with glaring flaws. Unlike Qwen’s output, I expect a lot of people would be happy using the website Claude created to represent themselves online.
And, for me, that’s not speculation. That’s actually what happened. Several months ago, I ditched WordPress and switched to a simple HTML website that was coded by Claude 3.5 Sonnet.
Problem #2: Local LLMs need lots of RAM
OpenAI’s CEO Sam Altman is constantly chin-wagging about the massive data center and infrastructure investments required to keep AI moving forward. He’s biased, of course, but he’s right about one thing: the largest and smartest large language models, like GPT-4, do require data center hardware with compute and memory far beyond that of even the most extravagant consumer PCs.
And it isn’t just limited to the best large language models. Even smaller and dumber models can still push a modern Windows laptop to its limits, with RAM often being the greatest limiter of performance.
Matt Smith / Foundry
The “size” of a large language model is measured by its parameters, where each parameter is a distinct variable used by the model to generate output. In general, more parameters mean smarter output—but those parameters need to be stored somewhere, so adding parameters to a model increases its storage and memory requirements.
Smaller LLMs with 7 or 8 billion parameters tend to weigh in at 4.5 to 5 GB. That’s not huge, but the entire model must be loaded into memory (i.e., RAM) and sit there for as long as the model is in use. That’s a big chunk of RAM to reserve for a single piece of software.
While it’s technically possible to run an AI model with 7 billion parameters on a laptop with 16GB of RAM, you’ll more realistically need 32GB (unless the LLM is the only piece of software you have open). Even the Surface Laptop 7 that I use to test local LLMs, which has 32GB of RAM, can run out of available memory if I have a video editing app or several dozen browser tabs open while the AI model is active.
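A back-of-envelope calculation makes the memory pressure concrete. The sketch below uses approximate figures (quantized weights at roughly 4.5 bits each) and ignores the extra memory a real runtime needs for its KV cache and the application itself:

```python
# Approximate weight-storage memory for quantized local LLMs (rule of thumb only).
def model_memory_gb(params_billions, bits_per_weight=4.5):
    """Rough GB of RAM needed just to hold the quantized weights."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for params in (7, 8, 70):
    print(f"{params}B parameters: about {model_memory_gb(params):.1f} GB for weights alone")

# Roughly: 7B -> 3.9 GB, 8B -> 4.5 GB, 70B -> 39.4 GB,
# which lines up with the 4.5-5 GB and ~40 GB figures mentioned in this article.
```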
Problem #3: Local LLMs are awfully slow
Configuring a Windows laptop with more RAM might seem like an easy (though expensive) solution to Problem #2. If you do that, however, you’ll run straight into another issue: modern Windows laptops lack the compute performance required by LLMs.
I experienced this problem with the HP Elitebook X G1a, a speedy laptop with an AMD Ryzen AI processor that includes capable integrated graphics and an integrated neural processing unit. It also has 64GB of RAM, so I was able to load Llama 3.3 with 70 billion parameters (which eats up about 40GB of memory).
The fictional resume HTML generation took 66.61 seconds to first token and an additional 196.7 seconds for the rest. That’s significantly slower than, say, ChatGPT. Matt Smith / Foundry
Yet even with that much memory, Llama 3.3-70B still wasn’t usable. Sure, I could technically load it, but it could only output 1.68 tokens per second. (It takes about 1 to 3 tokens per word in a text reply, so even a short reply can take a minute or more to generate.)
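To put 1.68 tokens per second in perspective, the arithmetic is simple (the words-to-tokens ratio below is a ballpark assumption within the 1-to-3 range mentioned above):

```python
# How long a modest reply takes at 1.68 tokens per second.
tokens_per_second = 1.68
words_in_reply = 150        # a short, paragraph-length answer
tokens_per_word = 1.5       # ballpark, within the 1-3 tokens-per-word range

tokens_needed = words_in_reply * tokens_per_word
seconds = tokens_needed / tokens_per_second
print(f"~{tokens_needed:.0f} tokens -> roughly {seconds / 60:.1f} minutes to generate")
# ~225 tokens -> roughly 2.2 minutes to generate
```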
More powerful hardware could certainly help, but it’s not a simple solution. There’s currently no universal API that can run all LLMs on all hardware, so it’s often not possible to properly tap into all the compute resources available on a laptop.
Problem #4: LM Studio, Ollama, GPT4All are no match for ChatGPT
Everything I’ve complained about up to this point could theoretically be improved with hardware and APIs that make it easier for LLMs to utilize a laptop’s compute resources. But even if all that were to fall into place, you’d still have to wrestle with the unintuitive software.
By software, I mean the interface used to communicate with these LLMs. Many options exist, including LM Studio, Ollama, and GPT4All. They’re free and impressive—GPT4All is surprisingly easy—but they just aren’t as capable or easy-to-use as ChatGPT, Anthropic, and other leaders.
Managing and selecting local LLMs using LM Studio is far less intuitive than loading up a mainstream AI chatbot like ChatGPT, Copilot, or Claude. Matt Smith / Foundry
Plus, local LLMs are less likely to be multimodal, meaning most of them can’t work with images or audio. Most LLM interfaces support some form of RAG to let you “talk” with documents, but context windows tend to be small and document support is often limited. Local LLMs also lack the cutting-edge features of larger online-only LLMs, like OpenAI’s Advanced Voice Mode and Claude’s Artifacts.
I’m not trying to throw shade at local LLM software. The leading options are rather good, plus they’re free. But the honest truth is that it’s hard for free software to keep up with rich tech giants—and it shows.
Solutions are coming, but it’ll be a long time before they get here
The biggest problem of all is that there’s currently no way to solve any of the above problems.
RAM is going to be an issue for a while. As of this writing, the most powerful Windows laptops top out at 128GB of RAM. Meanwhile, Apple just released the M3 Ultra, which can support up to 512GB of unified memory (but you’ll pay at least $9,499 to snag it).
Compute performance faces bottlenecks, too. A laptop with an RTX 4090 (soon to be superseded by the RTX 5090) might look like the best option for running an LLM—and maybe it is—but you still have to load the LLM into the GPU’s memory. A laptop RTX 5090 will offer 24GB of GDDR7 memory, which is a relatively large amount but still limited, and only enough to support AI models up to around 32 billion parameters (like QwQ 32B).
Even if you ignore the hardware limitations, it’s unclear if software for running locally hosted LLMs will keep up with cloud-based subscription services. (Paid software for running local LLMs is a thing but, as far as I’m aware, only in the enterprise market.) For local LLMs to catch up with their cloud siblings, we’ll need software that’s easy to use and frequently updated with features close to what cloud services provide.
These problems will probably be fixed with time. But if you’re thinking about trying a local LLM on your laptop right now, don’t bother. It’s fun and novel but far from productive. I still recommend sticking with online-only models like GPT-4.5 and Claude 3.7 Sonnet for now.
Further reading: I paid $200/mo for ChatGPT Pro so you don’t have to
PC World - 18 Mar (PC World) Some PC gamers use the terms frame rate and refresh rate interchangeably. But while they’re related, your gaming PC’s frame rate and refresh rate measure two very different things — one is fixed, while the other varies.
Frame rate explained
The frame rate, which is measured in frames per second (FPS), indicates the number of images displayed on the monitor per second. The higher the number of frames, the smoother the animation appears. In games, FPS determines how smoothly you see the animations and how quickly inputs are registered. A low FPS means that animations are not displayed correctly or are even skipped completely, which can lead to a stuttering display.
The performance of the graphics card (GPU) and the processor (CPU) mainly influences the FPS. The GPU does the main work in most games while the CPU plays a particularly important role in games with complex calculations such as physics or artificial intelligence. 30 FPS is considered acceptable for many games that do not rely on fast reactions. However, 60 FPS is the target for most games in order to guarantee a smooth experience. Higher FPS such as 120 or 144 offer advantages in competitive games in which every millisecond counts.
Refresh rate explained
The refresh rate, measured in Hertz (Hz), indicates how often the screen refreshes the image per second. The refresh rate depends on the display technology and the capabilities of the screen. Standard monitors offer a refresh rate of 60 Hz, which is sufficient for general use and casual gamers. However, gaming monitors can go up to 500 Hz.
Although both the refresh rate and the frame rate are crucial for smooth displays, there are important differences. For example, the frame rate is mainly influenced by the GPU while the refresh rate depends solely on the monitor technology.
It’s crucial that the frame rate does not exceed the refresh rate of the monitor, as this can otherwise lead to image errors such as tearing. An imbalance can also lead to stuttering, which is when the image is displayed several times in succession. To achieve the best results, you should check the refresh rate of your monitor and adjust the frame rate in the game settings accordingly. For example, if your monitor has a refresh rate of 60 Hz, you should set the game to 60 FPS.
Tearing is an image artifact that occurs when the frame rate and the refresh rate are out of sync. However, there are techniques to prevent this.
IDG
Technologies such as VSync, G-Sync, or FreeSync can help here. VSync synchronizes the FPS with the refresh rate to prevent tearing, but leads to a slight input delay. G-Sync and FreeSync flexibly adjust the refresh rate of the screen to the FPS to prevent tearing without causing a noticeable input delay. A balanced combination of refresh rate and frame rate is essential for a smooth gaming experience. Additional frames that your computer calculates but your monitor cannot display will only waste resources and increase the load on your device. A customized balance between frame rate and refresh rate not only ensures a smooth display, but also protects your system’s hardware.
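If you’re curious what capping the frame rate actually involves under the hood, here’s a minimal conceptual sketch of a frame limiter. It isn’t code from any particular game or driver, just the basic idea: measure how long the frame took, then sleep off the rest of the frame budget.

```python
# Conceptual frame limiter: keep the frame rate at or below the monitor's refresh rate.
import time

refresh_rate_hz = 60
target_frame_time = 1.0 / refresh_rate_hz   # ~16.7 ms per frame at 60 Hz

def update_and_render_frame():
    pass  # placeholder for the game's per-frame work

while True:
    start = time.perf_counter()
    update_and_render_frame()
    elapsed = time.perf_counter() - start
    # Sleep off whatever remains of the frame budget so we never exceed 60 FPS.
    if elapsed < target_frame_time:
        time.sleep(target_frame_time - elapsed)
```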
PC World - 17 Mar (PC World) Imagine being woken up late one Tuesday night by a phone call from your young relative. They’ve been in a car accident and urgently need money sent to their phone, not having their wallet on them. The connection is bad but it does sound like them. Still groggy and confused, you start making the transfer.
Only it’s not them calling you, they’re asleep, safe and sound. You’re talking to a robocall, steered by a scammer and made from a spoofed number. The scammer has cloned your relative’s voice by using their TikTok videos to train a so-called AI model. They’re sitting at a keyboard, guiding the conversation, probably from a country halfway around the globe.
Let’s take a closer look at what’s happening here. There are two threads that come together in these kinds of scams: the popularity of imposter calls (even as robocalls continue to decline) and the increasing availability of voice-cloning technology.
Imposter calls holding steady
According to an Incogni study, reports of unwanted calls in general, and robocalls in particular, had generally been on the decline from 2017 to 2023. Still, robocalls accounted for 55% of all reported unwanted calls in 2023, even though the ratio of robocalls to live calls was also in decline from 2021 (3.1 robocalls to every live call) to 2023 (1.6:1).
Drilling down into the topics covered during unwanted calls, the same study found that “imposter calls” held steady as being the most common type of call from 2019 to 2023, making up around a third of all reported calls in 2022 and 2023. Imposter calls were defined as “all unwanted calls where the caller impersonated someone else, an agency, or a company.”
To impersonate someone, a scammer would need not only their number and yours, but also some basic information like the person’s name, age and sex. To make a more elaborate imposter call convincing, they’d need a whole host of additional personal data, like ethnicity, hobbies, shopping habits, online activity, criminal and court records, even sexual preferences. This is exactly the kind of data a personal information removal service like Incogni removes from circulation, online and off.
Protect yourself from imposter scams with Incogni
There’s a significant proportion of unwanted calls that rely on impersonation. It’s reasonable to assume that a large number of these calls are scam calls, as it’s difficult to imagine a legitimate reason for a caller pretending to be someone else. What happens when new technologies make it easier for scammers to impersonate not only celebrities and politicians, but everyday people as well?
Voice-cloning technology enters the mix
Recent advances in “AI” technology have resulted in high-quality voice-generating and voice-cloning software being readily available, often for free. These technologies make the nightmare scenario of someone cloning the voice of a loved one and impersonating them on the phone possible.
Combined with number spoofing (making Caller ID display a number different from the one they’re calling from) and the availability of vast amounts of personal information online—including, for many people, voice-samples—these technologies can make for some extremely convincing impersonations.
Here’s how a criminal could execute such a scam:
Step 1: Target selection
If they’re going to go after you, it’s going to have to be worth their while. Scammers can:
Buy a ready-made list of people vulnerable to scams directly from a data broker,
Browse data broker records, looking for the perfect victim (like someone who’s older and has just sold a property, for example),
Buy or download breached or leaked data sets on the dark web,
Come across your social media profiles and decide to target you based on what you share there.
Ultimately, anything that suggests to a scammer that you both have something worth stealing and are sufficiently gullible is enough to make you a target.
Step 2: Background research
A scammer is going to need to know at least a few key things about you if they’re going to target you with a convincing scenario. These are some of the more common data points used in impersonation scams:
Full name,
Contact details, like phone number, email and address,
Employment history,
Educational background,
Financial situation,
Criminal history,
Relatives,
Known associates.
And, of course, they’ll need a similar set of data points on each of your relatives and associates, especially if they’re going to be impersonating one or more of them. Where can they find all this data, nicely packaged into detailed profiles?
Data brokers are companies that specialize in collecting, organizing, and monetizing personal information just like this. With trial memberships available for as little as $1, basically anyone can end up with detailed profiles on you and your close ones with just a few clicks. Personal information removal services like Incogni take these profiles down and request that data brokers stop collecting your data.
Remove yourself from the web with Incogni
Step 3: Collecting voice samples (optional)
If the scammer is planning on impersonating someone over the phone, they’ll need some recordings of that person speaking to give their “AI” software something to imitate. If you post videos of yourself on social media, have a YouTube channel or have appeared on a podcast, this won’t be a problem for them.
Step 4: Number spoofing (also optional)
Again, if the scammer is impersonating someone close to you, it’d be more convincing if the call appeared to be coming from that person’s number. There are several ways to achieve this at little-to-no cost to the scammer, although it might require some technical know-how.
They can’t spoof a number they don’t know, though, so having this kind of personal data purged from the internet can stop even these very technical attacks dead in their tracks.
Step 5: Execution
By now, the scammer knows a lot about you, about the person they’re going to impersonate, and about your shared network of friends, colleagues and relatives. They just need to choose the right time (often when you’re likely to be tired, in a rush or distracted) and make the call.
The relative simplicity of perpetrating a fraud like this goes some way to explaining why the FCC made the use of “AI-generated voices” in robocalls illegal in 2024. Of course, making something illegal only discourages law-abiding people from doing it—scammers are unlikely to take notice.
You might be feeling pretty safe at this point: maybe your loved ones don’t have any voice-recordings out there for the scammer to sample, maybe your phone or carrier has anti-spoofing measures in place, maybe you’re confident that you’d pick up on the fake voice, even if the scammer lowers the audio quality and adds background noise.
The fact is, if you’re an everyday person with not much of an online presence and not much in the way of money to lose, then it’s unlikely you’d be targeted with such an involved scam.
The more likely nightmare scenario
We started with a scenario in which a scammer clones the voice of someone close to you. But we also saw that the voice-cloning and number-spoofing steps are optional—how so? Well, if the scammer knows enough about you and your close one, they don’t need to impersonate them for the scam to work.
It’s late Tuesday night, you’re asleep when your phone wakes you up. You don’t recognize the number. You pick up. It’s a police officer, he says your young relative has been in a car accident and they’re in custody. Your relative asked the police officer to call you, they want to keep the situation under wraps until they can talk to their parents. In the meantime, they need you to bail them out.
Still groggy and confused, you start making the transfer.
In this scenario, the scammer doesn’t need to sound like anyone in particular, just a random police officer. The need for voice cloning goes away, as does the need for number spoofing. The scammer might still need to synthesize a voice, to cover up poor English skills or a suspiciously strong accent, for example, but that’s easy enough to do.
All they really need is to find your and your relative’s records on a data broker’s website.
What you can do
Staying off social media is always a good idea, but not always feasible. Also, not taking part in recorded interviews or presenting your ideas publicly just to avoid voice-cloning attacks seems like throwing the baby out with the bathwater.
Scammers can’t or at least aren’t likely to target you if they can’t find your information on data brokers’ websites in the first place. They also can’t easily figure out who your friends and family are (especially if you’ve set your social media profiles to “private”).
There’s a big difference between a “police officer” calling you to ask about your nephew Daniel Thomas Walsh, born on the 11th of March, 2006, who drives a blue Silverado and would have been on his way home from work at the pizza place, and the same “police officer” umming and ahhing as he can’t really give any details concerning Daniel other than his name.
Take scammers’ best tool away from them by having your personal information removed from data-broker databases. An automated personal information removal service like Incogni can make this an easy, set-and-forget process.
When choosing a data removal service, look for one that covers a wide range of data brokers, including marketing, recruitment, risk-mitigation, and people-search data brokers. Many services remove data only from people search sites, leaving users exposed.
Incogni covers all four of these data broker types, removing personal information from over 220 brokers in total. It also offers a family plan, so you can keep your and your nephew’s information private.
Get Incogni
PC World - 17 Mar (PC World) Every online action leaves a digital footprint that influences your online identity. This footprint can have both positive and negative effects. To protect one’s online presence, it’s essential to understand what the digital footprint is and the potential consequences of leaving it online. This article will discuss these risks, provide tips on how to protect your information and highlight the importance of online security tools like Surfshark Antivirus to keep your personal data safe.
What is a digital footprint?
A digital footprint, also called an electronic footprint or digital shadow, is the trail of activities and traceable data you leave behind online. It can include things like logging into an app, browsing the web, posting on social media, etc. One simple use of a digital footprint is advertising — advertisers can gather information about your interests and preferences from your digital footprint and show you targeted ads.
There are two types of digital footprints:
• Active digital footprint consists of the data you intentionally leave online, such as posts or comments you make on social media, newsletter subscriptions, and online purchases;
• Passive digital footprint consists of data that might be collected without your knowledge. Much of it comes from website cookies — they track your visits, IP (Internet Protocol) address, biometric and geolocation information, and more.
Let’s look at some numbers. According to Surfshark’s study on digital footprints, people use nine apps on average for 4 to 5 hours every day, mainly for social interactions. The average smartphone user can generate up to 188 digital footprints daily. Of course, developers use some data to improve your apps, so not all data is used for malicious purposes, but the numbers are still shocking.
So, understanding your digital footprint is important, but effectively managing it is crucial. Failing to do so can lead to serious consequences. Let’s explore the possible outcomes.
What are the consequences of leaving a digital footprint?
Your digital footprint can heavily influence how you access your accounts, impact your online reputation, and determine the advertisements you see online. It also increases the risk of being hacked.
For example, if a cybercriminal gets a hold of your digital footprint, they can impersonate you and trick the people close to you into giving out your personal information. Or, if a service you use experiences a data breach, they could leak your sensitive data.
What’s even worse is that your digital footprint can be easily accessible. Data brokers, advertisers, mobile carriers, internet providers, co-workers, hackers, and other internet users can find it, too.
Given the potential risks, taking proactive measures to protect your digital footprint and safeguard your personal information online is crucial.
How can you protect your digital footprint?
To minimize your active digital footprint, limit the amount of data you share online, adjust your privacy settings, delete old accounts that are no longer in use, and avoid untrusted websites. As for protecting your passive digital footprint, you should consider more advanced security measures, such as using Surfshark Antivirus, which is part of Surfshark One and Surfshark One+ plans.
How can Surfshark Antivirus protect your digital footprint?
Surfshark Antivirus provides powerful device protection against malware and other cyberthreats. It can help safeguard your digital footprint by performing the following:
• Protecting devices from malware, like keyloggers, that secretly gather your data;
• Scanning for threats in real-time to prevent harmful software from accessing your data;
• Finding and removing adware — software that shows ads and collects user data.
Additionally, Surfshark Antivirus also offers the following:
• Real-time protection;
• Webcam protection;
• Fully customizable security;
• Prevention of online activity tracking by ad companies and bots.
Let’s say you’re already following the general security tips and using Surfshark Antivirus. What more can you do to safeguard yourself as much as possible? You can try other products that are included in the Surfshark One and Surfshark One+ plans. Let’s discover how they can help you protect your digital footprint.
View Surfshark antivirus
Additional security measures to protect your digital footprint
Surfshark stands out for its multifunctionality. Their Surfshark One package includes a full array of security products: Surfshark VPN, Surfshark Antivirus, Surfshark Alert, Surfshark Search, and Alternative ID. With Surfshark One+, you can also get a data removal service called Incogni.
To protect your active digital footprint, consider using the following products:
• Alternative ID: Protects your main email from spam and safeguards it from breaches;
• Surfshark Alert: Monitors your emails, credit cards, and personal identification number and provides immediate data leak alerts.
To protect your passive digital footprint, consider using the following products:
• Surfshark VPN: Enhances your online privacy by hiding your IP address and encrypting your connection, and prevents tracking by websites, ISPs, and hackers;
• Surfshark Search: Delivers unbiased and ad-free search results by not logging queries or search history;
• Incogni: Requests data brokers to delete your data from their databases and monitors their compliance to prevent your data from being collected or sold again.
Using Surfshark Antivirus can be your first step toward taking proactive steps to safeguard your digital footprint. However, if you’re really concerned about your privacy, it’s worth looking into and using other bundled products.
Final thoughts on securing your digital footprint
Understanding your digital footprint is crucial for online security, but managing it is what really matters. For your active footprint, limit the information you share online, update your privacy settings, and perform a spring cleaning of your old accounts. For the passive footprint, consider advanced security measures like Surfshark Antivirus, which protects against malware, scans for threats in real-time, and removes adware. And be sure to take advantage of the full Surfshark bundle to access all the additional cybersecurity tools available to you.
View Surfshark antivirus