Category: Technology

  • 50 years of Quirks & Quarks and half a century of science

    50 years of Quirks & Quarks and half a century of science

     

    Quirks & Quarks celebrates its 50th anniversary this week, looking back at half a century of science and looking ahead to the next 50 years.

    In 1975, when David Suzuki hosted the first episode of Quirks & Quarks, the excitement of the Moon landings had subsided and North America faced a new challenge: the fallout of the 1973 energy crisis.

    Oil supplies from the Middle East had been disrupted. The price of gasoline skyrocketed overnight, gas stations ran out of fuel, and those that had supplies had lines stretching for blocks.

    The watchwords of the day were fuel economy and energy efficiency, as scientists and engineers were tasked with stretching limited oil reserves as far as possible while also looking for alternative energy sources.

    A collage of three men.
    Quirks & Quarks hosts over the years. From left, David Suzuki (1975-1979), Jay Ingram (1979-1991) and Bob McDonald (1992-present). (CBC)

    Consumers were switching from traditional, large, gas-guzzling North American cars to more efficient models, often from Japan and Europe. The U.S. auto industry followed suit with smaller, lighter, more aerodynamic vehicles powered by small engines designed to squeeze every possible mile out of a gallon of gasoline.

    Research dollars were poured into renewable energy, leading to the development of alternatives such as clean hydrogen fuel, improved solar panels, wind turbines, geothermal energy and biofuels.

    Homeowners were encouraged to insulate their homes, and eventually high-efficiency heat pumps and furnaces became available, which reduced heating bills.

    At the same time, industries began to be held more accountable for air and water pollution. Car manufacturers were required to install catalytic converters in exhaust systems to capture emissions, while heavy industries were forced to clean up effluents and limit discharges of waste and harmful chemicals into rivers and lakes.

    The Earth seen in the distance against a black background, the surface of the Moon in the foreground.
    Taken aboard Apollo 8 by Bill Anders, this iconic image shows Earth rising beyond the lunar surface as the first crewed spacecraft orbited the Moon, with Anders, Frank Borman and Jim Lovell on board. (NASA)

    The 1970s were a time when science began to turn its focus to caring for planet Earth, thanks in part to photos of Earth taken from the Moon by Apollo astronauts and the influence of Rachel Carson’s groundbreaking 1962 book, Silent Spring. We have come to think of our planet as a small oasis of life in a vast and indifferent universe.

    In other words, science was increasingly focused on the environment. It highlighted problems such as carbon emissions, but also presented solutions such as green energy.

    During this period, a revolution began at home with the arrival of personal computers, which ended up changing the way we communicate, do business and connect with everyone in the world.

    In medicine, the Human Genome Project emerged in the 1990s to sequence all human DNA, leading to a deeper understanding of human biology and the evolution of life.

    WATCH | Diana Filer tells how she created the Quirks & Quarks program in 1975:

    This woman created the name Quirks & Quarks 50 years ago

    Diana Filer, the original producer of CBC Radio’s science program Quirks & Quarks, remembers releasing the first episode in 1975.

     

    So here we are, 50 years later, and a lot has changed. In the developed world, we are largely healthier and live longer than ever before. More food is being produced to feed a population that has doubled and we are surrounded by technology that provides information at our fingertips. We can travel almost anywhere on the globe.

    Science and technology have turned humans into a superspecies. We explore every domain on the planet and beyond. We discover our place in an unimaginably huge and expanding universe, investigate the mysteries of life down to the molecular level, and discover connections between all living things. Our knowledge has never been greater.

    But all this progress came at a cost. Scientists have repeatedly warned us that the enormous environmental stress on our planet is unsustainable. Our technological advances have often brought paradoxical results. For example, vehicles today are cleaner and more efficient. But many of them have grown back to enormous size – so efficiency has fueled consumption, rather than reducing it.

    An aerial photo of a traffic jam on a busy downtown Toronto street.
    Vehicles today are cleaner and more efficient than ever before, but they’re also larger on average, meaning we’re not seeing as big a drop in emissions as we could. (Patrick Morrell/CBC)

    The natural world is under extraordinary pressure. Scientists have calculated that species are disappearing at a rate not seen since the extinction of the dinosaurs. There are a range of causes, from human pollution and pesticides, to habitat loss due to human activity, to climate disruption.

    Every summer, more and more hectares of our forests burn. Water scarcity and droughts are becoming more common and more damaging; fueled by warmer oceans, storms have grown stronger.

    Despite ambitious international agreements, we have not met our commitments to reduce greenhouse gas emissions. Climate concerns often seem to take a backseat to economic interests. And many researchers point to positive environmental policies that are being obstructed by those trying to maintain business as usual.

    The human impact on the planet has even been given a new geological name by some scientists: the Anthropocene, a period defined by the permanent mark we are leaving in the geological record.

    It may seem like we live in dark times, but there is hope.

    Wind turbines seen against a sunset
    Wind energy, as seen here in Pugwash, NS, is just one of the technologies we can turn to to meet our energy needs without burning up our planet, says Bob McDonald. (Cassie Williams/CBC)

    Many of the technological innovations that began to be developed in the 1970s to address oil shortages are now mature and realizing their enormous potential. The technologies to produce electricity from the free energy of the sun, the wind, the heat of the Earth or the energy of the atom are there for us.

    We know how to keep the lights on, the wheels turning, and the food on the plate without burning the planet. Science and technology, researchers consistently say, are there for us to use.

    However, it can be challenging to distinguish between real science and pseudoscience, between evidence-based research and self-appointed experts who claim that climate change is a hoax, that vaccines or painkillers cause autism, that the moon landings were faked, that the Earth is flat, and that alien civilizations live on Mars.

    However, this distinction is vital because the issues we face, such as climate change, energy production, food and drinking water supplies, all involve scientific principles. And a scientifically literate society can make intelligent decisions about how to proceed from here.

    Quirks & Quarks has been contributing in its own small way to this scientific literacy for half a century, and I am extremely proud to have been a part of it.

    Source

  • Between excitement and hope: Seattle biotech leaders weigh in on AI’s real impact on drug development

    Between excitement and hope: Seattle biotech leaders weigh in on AI’s real impact on drug development

     

    Seattle Biotechnology and AI panel at a one-day conference on October 8, 2025. From left: Alex Federation, Talus Bioscience; Erik Procko, Cyrus Biotechnology; Marc Lajoie, Outpace Bio; Jamie Lazarovits, Archon Biosciences; and Chris Picardo, Madrona. (Photo by Life Sciences Washington)

    Dozens of Seattle biotech companies are using artificial intelligence to design new medical treatments. But at a conference of industry leaders and investors this week, scientists delivered a nuanced message: AI holds enormous promise, but expectations must remain grounded in reality.

    The trade association Life Sciences Washington and investment company Madrona hosted the one-day forum in downtown Seattle delving into biotechnology, pharmaceuticals and AI.

    “There is some hype about the breadth and depth of the capabilities of some of these AI models,” said Jamie Lazarovits, CEO and co-founder of Archon Biosciences. Researchers need to “be very cautious” when drawing conclusions from the data generated by the models, he said.

    And there’s still a lot to be excited about.

    “What was science fiction 15 years ago is now reality,” said Erik Procko, chief scientist at Cyrus Biotechnology. “So yes, there has been enthusiasm. But there is still enormous potential, and sometimes the progress being made is just dizzying.”

    Lazarovits and Procko were part of a panel that included four Seattle startups leveraging AI. Each company is facing different challenges in drug development:

    • Archon, a company that emerged from stealth more than a year ago with $20 million in funding, is using AI to design proprietary protein structures, known as Antibody Cages or AbCs, that are intended to help antibodies bind to target cells and avoid other cells.
    • Cyrus is developing medicines focused on identifying and removing regions that trigger an immune system response – a challenge called immunogenicity. The 10-year-old company has raised $36.6 million, according to PitchBook.
    • Outpace Bio, a startup founded in 2021 that has raised $200 million, is designing proteins that aim to boost T-cell therapies for solid tumors, which make up 90% of cancers. Tumors are disappointingly good at deflecting current T-cell treatments, which often stop working within a month.
    • Talus Bioscience, launched in 2020 and backed by almost US$20 million, targets transcription factors – proteins that are part of the “regulome” that turns genes on and off. The company focuses on transcription factors that activate genes driving specific cancers.

    In addition to discussing their own work, the panelists identified key principles for how AI should — and shouldn’t — be used in biotech research:

    Augmenting researchers, not replacing them

    Marc Lajoie, co-founder and CEO of Outpace, compared AI tools to the robotic exoskeleton used by Ripley and others in the sci-fi film Aliens to carry heavy loads – and fight the Xenomorph Queen.

    “It makes the researcher better,” he said. “We are not trying to replace the researcher.”

    Models must meet reality

    Lazarovits noted that while AI can generate interesting clues and information, it doesn’t mean much until it’s tested in real experiments with cells and organisms.

    “Whenever we try to adopt new models, new AI methods, you can get incredibly excited about that in silico validation,” he said. “But the reality is: what do you actually validate in the wet lab?”

    The real bottleneck: clinical trials

    AI is great for developing new therapies, but the most expensive and labor-intensive part of the drug development process is seeing how they work in patients.

    “The most impactful place for AI to truly change the game in drug development would be to conduct smaller, better-powered clinical studies,” Lajoie said. The way to do this, he added, would be to find better drug candidates that perform multiple functions.

    Still searching for AI’s breakthrough moment

    Procko still hopes that AI will go further to spur advances in biotechnology and pharmacology.

    “AI is currently fantastic for predicting, say, a protein structure, but for creating new medicines it hasn’t yet found its killer application,” Procko said. “What does AI allow us to do now to produce new medicines that were previously simply impossible to manufacture? How could this be a game changer?”

    The question captures a central tension discussed on the panel and at the conference: While AI has transformed the way Seattle biotech companies approach drug design, the industry is still navigating the gap between computational promise and clinical proof.

    Source

  • Australian construction robot Charlotte can 3D print a 2,150-square-foot house in one day using sustainable materials

    Australian construction robot Charlotte can 3D print a 2,150-square-foot house in one day using sustainable materials

     


    Construction robots are no longer a distant idea. They are already changing the workplace, taking on repetitive, heavy and often dangerous tasks. The latest comes from Australia, where a spider-like machine named Charlotte is making headlines.

    Charlotte is designed to 3D print an entire 2,150 square foot home in just one day. This is equivalent to the speed of more than 100 bricklayers working simultaneously. This offers a glimpse into how the future of housing can be built.



    Robot 3D prints a house structure in a desert area with trees in the background.

    Charlotte the robot 3D prints houses in just 24 hours using eco-friendly materials. (Crest Robotics)

    How the Charlotte robot works

    Charlotte is a collaboration between Crest Robotics and Earthbuilt Technology. The robot is not limited to stacking bricks or tying rebar. Instead, it uses a giant extrusion system that lays down layers of environmentally friendly material.

    This material comes from sand, crushed brick and recycled glass, all locally sourced. The result? A fireproof, flood-proof structure created with a much smaller carbon footprint than traditional construction methods.


    Why Charlotte the Robot Stands Out

    This 3D printing construction robot stands out for its unique combination of speed, strength, versatility and affordability.

    • Speed: Prints a house in 24 hours.
    • Strength: Uses durable and sustainable materials.
    • Versatility: It can stand on its spider legs to continue building higher walls.
    • Affordability: Eliminates many expensive construction steps.

    Although Charlotte is still in the development phase, a scaled-down prototype has already been presented. Researchers believe this could help solve housing shortages where labor is scarce and construction costs are skyrocketing.

    3D printing robot builds a structure next to a supply trailer in a desert.

    Its spider-like legs allow it to climb and build higher, reducing costs and saving time. (Crest Robotics)

    The future of 3D-printed Moon bases beyond Earth

    Charlotte’s creators are also keeping an eye on the stars. They envision future versions of the robot building lunar bases for research and exploration. With its compact design and autonomous operation, Charlotte could adapt to the extreme environments of space as well as the challenges on Earth.


    What this means for you

    If Charlotte lives up to its promise, it could reshape the way homes are built around the world. Faster construction means faster housing availability. Lower costs and sustainable materials mean more affordable homes with a lower environmental impact. For anyone facing rising housing prices or construction delays, technology like Charlotte can bring a ray of hope.



    Robot 3D prints a structure on the surface of the Moon under a dark sky with a nearby support vehicle.

    Future versions could even build lunar bases for research and exploration. (Crest Robotics)

    Kurt’s Key Takeaways

    Charlotte may be years away from building its first large-scale home, but its prototype already points to a future where robots take on critical roles in construction. From solving housing crises on Earth to building shelters on the Moon, Charlotte shows how robotics and 3D printing can work together to solve real problems.


    Would you live in a house 3D printed by a robot like Charlotte, or even one built on the moon? Let us know by writing to us at Cyberguy.com/Contato


    Copyright 2025 CyberGuy.com. All rights reserved.

     

    Source

  • iPhone 18 Pro Rumored to Have These 6 New Features

    iPhone 18 Pro Rumored to Have These 6 New Features

     

    While the iPhone 18 Pro and iPhone 18 Pro Max are still almost a year away, there are already rumors of some new features and changes for the devices.

    iPhone 17 Pro colors
    Below, we recap some of the early iPhone 18 Pro rumors so far.

    Smaller Dynamic Island

    The standard iPhone 18, iPhone 18 Pro, and iPhone 18 Pro Max will be equipped with a slightly smaller Dynamic Island, but the devices will not feature under-screen Face ID, according to the Weibo account Instant Digital.

    iPhone 14 Pro Dynamic Island
    There have been conflicting rumors about whether the iPhone 17 Pro models would get a smaller Dynamic Island, but its size ultimately did not change. Now the rumor is back on the table for the iPhone 18 series, and there is a good chance it is true this time, as it would be a stepping stone toward the rumored all-glass 20th-anniversary iPhone.

    Under-screen Face ID, however, is no longer expected until iPhone 19 Pro models or later.

    Instant Digital has about 1.5 million followers on Weibo, a popular Chinese social media platform. The account has accurately leaked information about future Apple products in the past, such as the iPhone 17 Pro models’ vapor chamber cooling system, the yellow finish for the iPhone 14 and iPhone 14 Plus, and the Apple Watch Ultra 2’s Titanium Milanese Loop. However, the account does not have a perfect track record, and some of the other iPhone 17 rumors it shared in recent months have not panned out.

    ‘Translucent’ MagSafe area

    Overall, the iPhone 18 Pro models will feature a similar design to the iPhone 17 Pro models, according to Digital Chat Station, a previously accurate leaker with more than three million followers on the Chinese social media platform Weibo.

    iPhone 17 Pro USB-C port
    The leaker said the devices will have the same rear camera system design as the iPhone 17 Pro models, with a “plateau” housing three lenses in a triangular arrangement. They also expect the iPhone 18 Pro and iPhone 18 Pro Max to have the same 6.3-inch and 6.9-inch screen sizes used since the iPhone 16 Pro and iPhone 16 Pro Max.

    Notably, the leaker claimed that the Ceramic Shield area on the back of the iPhone 18 Pro models will feature a “slightly transparent design,” without giving further details.

    Perhaps this just means that the Ceramic Shield cutout on the back of the iPhone 18 Pro models will have a more matte appearance than on the iPhone 17 Pro models, but we’ll have to wait for additional rumors to surface for clarification.

    Variable Aperture

    iPhone 17 Pro side profile
    The main 48-megapixel Fusion camera on both iPhone 18 Pro models will offer variable aperture, according to supply chain analyst Ming-Chi Kuo.

    With variable aperture, users would be able to control the amount of light that passes through the camera lens and reaches the sensor. The main cameras on all models from iPhone 14 Pro to iPhone 17 Pro have a fixed aperture of ƒ/1.78, and the lens is always wide open and shooting at this wider aperture. With the iPhone 18 Pro models, users would be able to manually change the aperture, according to this rumor.
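
    For a rough sense of what aperture control means for exposure, the light reaching the sensor scales roughly with the inverse square of the f-number. The short Python sketch below is purely illustrative: the stop values other than ƒ/1.78 are hypothetical examples, not leaked specifications.

        # Illustrative only: light gathered at different f-numbers, relative to
        # the fixed f/1.78 aperture used since the iPhone 14 Pro.
        def relative_light(f_number: float, reference: float = 1.78) -> float:
            """Approximate light gathered at f/f_number relative to f/1.78 (1/N^2 law)."""
            return (reference / f_number) ** 2

        for n in (1.78, 2.8, 4.0):  # hypothetical stops, for illustration
            print(f"f/{n}: {relative_light(n):.2f}x the light of f/1.78")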

    A variable aperture on the iPhone 18 Pro models should give users greater control over depth of field, which refers to the sharpness of an object in the foreground compared to the background. However, given that iPhones have smaller image sensors due to size constraints, it’s unclear exactly how significant this improvement would be.

    None of the iPhone 17 Pro cameras offer variable aperture.

    Other rumors

    Read our iPhone 18 roundup for more information.

    Source

  • Together AI’s ATLAS adaptive speculator delivers 400% inference acceleration when learning from real-time workloads

    Together AI’s ATLAS adaptive speculator delivers 400% inference acceleration when learning from real-time workloads

    Companies scaling AI deployments are hitting an invisible performance wall. The culprit? Static speculators that can’t keep up with shifting workloads.

    Speculators are smaller AI models that work alongside large language models during inference. They draft several tokens ahead, which the main model then verifies in parallel. This technique (called speculative decoding) has become essential for companies trying to reduce inference costs and latency. Instead of generating one token at a time, the system can accept multiple tokens at once, drastically improving throughput.
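
    As a rough illustration of that draft-and-verify loop, here is a minimal Python sketch. The draft_model and target_model objects and their generate/verify methods are hypothetical stand-ins for whatever interfaces a real inference engine exposes; this is not Together AI’s or vLLM’s actual API.

        # Minimal sketch of one speculative-decoding step with greedy acceptance.
        # draft_model and target_model are hypothetical interfaces.
        def speculative_step(tokens, draft_model, target_model, k=5):
            # 1. The small speculator drafts k tokens ahead.
            draft = draft_model.generate(tokens, num_tokens=k)

            # 2. The large model scores all k positions in a single parallel pass
            #    instead of k sequential decoding steps.
            verified = target_model.verify(tokens, draft)

            # 3. Keep the longest prefix the target model agrees with; on the first
            #    mismatch, take the target model's own token and stop.
            accepted = []
            for drafted_tok, target_tok in zip(draft, verified):
                if drafted_tok == target_tok:
                    accepted.append(drafted_tok)
                else:
                    accepted.append(target_tok)
                    break
            return tokens + accepted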

    Together AI today announced research and a new system called ATLAS (AdapTive-LeArning Speculator System) that aims to help companies overcome the challenge of static speculators. The technique provides a self-learning inference optimization capability that can deliver inference performance up to 400% faster than the baseline performance available in existing inference technologies such as vLLM.

    The company, founded in 2023, has focused on optimizing inference on its enterprise AI platform. Earlier this year it raised US$305 million as adoption and customer demand grew.

    “The companies we work with generally, as they grow, see changes in workloads and then don’t see as much acceleration in speculative execution as they once did,” Tri Dao, chief scientist at Together AI, told VentureBeat in an exclusive interview. “These speculators often don’t perform well when their workload domain starts to change.”

    The workload drift problem no one talks about

    Most speculators in production today are “static” models. They are trained once on a fixed dataset representing expected workloads and then deployed without any adaptability. Companies like Meta and Mistral ship pre-trained speculators alongside their core models. Inference platforms like vLLM use these static speculators to increase throughput without changing the quality of the output.

    But there is a problem. When a company’s use of AI evolves, the static speculator’s accuracy plummets.

    “If you are a company that produces coding agents, and most of your developers write in Python, and suddenly some of them switch to writing Rust or C, then you see the speed start to slow down,” Dao explained. “The speculator has a mismatch between what it was trained on and what the actual workload is.”

    This workload drift represents a hidden tax on AI scaling. Companies either accept performance degradation or invest in retraining custom speculators, a process that only captures a snapshot in time and quickly becomes outdated.

    How Adaptive Speculators Work: A Dual Model Approach

    ATLAS uses a dual speculator architecture that combines stability with adaptation:

    The static speculator – A heavyweight model trained on broad data that provides consistent baseline performance. It serves as a “speed floor.”

    The adaptive speculator – A lightweight model that continuously learns from live traffic. It specializes in emerging domains and usage patterns.

    The confidence-aware controller – An orchestration layer that dynamically chooses which speculator to use and adjusts the speculation “lookahead” based on confidence scores.

    “Before the adaptive speculator learns anything, we still have the static speculator to help provide the speedup at the beginning,” Ben Athiwaratkun, AI scientist at Together AI, explained to VentureBeat. “When the adaptive speculator becomes more confident, the speed increases over time.”

    The technical innovation lies in balancing the acceptance rate (how often the target model agrees with the drafted tokens) against draft latency. As the adaptive model learns from traffic patterns, the controller relies more on the lightweight speculator and extends the lookahead, which compounds the performance gains.
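
    A hedged sketch of what such a confidence-aware controller could look like is below. The class, threshold and lookahead values are illustrative assumptions drawn from the description above, not Together AI’s actual implementation.

        # Illustrative controller that picks a speculator and a lookahead length
        # based on the recent acceptance rate (all names and numbers are assumptions).
        class SpeculatorController:
            def __init__(self, static_spec, adaptive_spec,
                         confidence_threshold=0.7, min_lookahead=3, max_lookahead=8):
                self.static_spec = static_spec        # trained once; the "speed floor"
                self.adaptive_spec = adaptive_spec    # keeps learning from live traffic
                self.confidence_threshold = confidence_threshold
                self.min_lookahead = min_lookahead
                self.max_lookahead = max_lookahead

            def choose(self, recent_acceptance_rate: float):
                """Return (speculator, lookahead) for the next decoding step."""
                if recent_acceptance_rate >= self.confidence_threshold:
                    # The adaptive speculator has earned trust on this workload:
                    # use it and speculate further ahead.
                    return self.adaptive_spec, self.max_lookahead
                # Otherwise fall back to the static speculator with a short lookahead.
                return self.static_spec, self.min_lookahead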

    Users do not need to adjust any parameters. “On the user side, users do not need to turn any knobs,” Dao said. “On our side, we turn these knobs for users, to arrive at a setting that achieves good acceleration.”

    Performance that rivals custom silicon

    Together AI tests show that ATLAS achieves 500 tokens per second on DeepSeek-V3.1 when fully adapted. Most impressively, these numbers on Nvidia B200 GPUs match or exceed specialized inference chips like Groq’s custom hardware.

    “Software and algorithmic improvement are able to bridge the gap with truly specialized hardware,” Dao said. “We were seeing 500 tokens per second on these huge models that are even faster than some custom chips.”

    The 400% speedup the company claims for inference represents the cumulative effect of Together’s Turbo optimization suite. FP4 quantization provides an 80% speedup over the FP8 baseline. The static Turbo Speculator adds another 80-100% gain. The adaptive system sits on top, and each optimization compounds the benefits of the others.
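
    As a back-of-the-envelope illustration of how those layers could stack, multiplying the stated gains lands in the ballpark of the headline figure. The adaptive layer’s individual contribution below is an assumption, since the article does not break it out separately.

        # Rough compounding of the claimed gains (illustrative, not a benchmark).
        fp4_quantization = 1.8    # ~80% speedup over the FP8 baseline
        static_speculator = 1.9   # another ~80-100% gain
        adaptive_layer = 1.2      # assumed extra gain once ATLAS has adapted

        total = fp4_quantization * static_speculator * adaptive_layer
        print(f"~{total:.1f}x baseline throughput")  # about 4.1x, roughly the "400%" claim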

    Compared to standard inference engines like vLLM or Nvidia’s TensorRT-LLM, the improvement is substantial. Together AI measures against the stronger of the two baselines for each workload before applying speculative optimizations.

    The memory-computation tradeoff explained

    Performance gains come from exploiting a fundamental inefficiency in modern inference: wasted computational power.

    Dao explained that normally during inference, much of the computational power is not fully utilized.

    “During inference, which is actually the dominant workload nowadays, you mainly use the memory subsystem,” he said.

    Speculative decoding trades idle computation for reduced memory access. When a model generates one token at a time, it becomes memory bound. The GPU is idle while waiting for memory. But when the speculator proposes five tokens and the target model checks them simultaneously, computation utilization increases while memory access remains approximately constant.

    “The total amount of computation to generate five tokens is the same, but you only needed to access the memory once instead of five times,” Dao said.
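
    A toy calculation makes the tradeoff concrete. The numbers below are illustrative assumptions (a 70-billion-parameter FP16 model and B200-class memory bandwidth), not measurements from Together AI.

        # Toy model: token-by-token decoding is dominated by streaming the weights.
        weight_bytes = 70e9 * 2   # assumed 70B-parameter model at 2 bytes per parameter (FP16)
        bandwidth = 8e12          # assumed ~8 TB/s of HBM bandwidth on a B200-class GPU

        per_pass = weight_bytes / bandwidth   # time to read the weights once
        k = 5
        one_at_a_time = k * per_pass          # k sequential tokens -> k weight reads
        speculative = 1 * per_pass            # draft k tokens, verify them in one pass

        print(f"{one_at_a_time / speculative:.0f}x fewer weight reads for {k} tokens")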

    Think of it as a smart cache for AI

    For infrastructure teams familiar with traditional database optimization, adaptive speculators work like an intelligent caching layer, but with a crucial difference.

    Traditional caching systems like Redis or memcached require exact matches. You store exactly the same query result and retrieve it when the specific query is executed again. Adaptive speculators work differently.

    “You can see this as a clever way of caching – not caching exact matches, but discovering some patterns you see,” Dao explained. “Broadly speaking, we’re seeing that you’re working with similar code, or, you know, controlling computation in a similar way. We can then predict what the big model will say. We just get better and better at predicting it.”

    Instead of storing exact answers, the system learns patterns from how the model generates tokens. It recognizes that if you are editing Python files in a specific codebase, certain token sequences become more likely. The speculator adapts to these patterns, improving its predictions over time without needing identical data.

    Use cases: RL training and evolving workloads

    Two business scenarios particularly benefit from adaptive speculators:

    Reinforcement Learning Training: Static speculators quickly fall out of alignment as policy evolves during training. ATLAS continuously adapts to changes in policy distribution.

    Evolving workloads: As companies discover new AI use cases, the composition of the workload changes. “Maybe they started using AI for chatbots, but then they realized, hey, it can write code, so they started switching to code,” Dao said. “Or they realize that these AIs can actually call up tools and control computers and do accounting and things like that.”

    In a coding session, the adaptive system can specialize for the specific codebase being edited – files not seen during training. This further increases acceptance rates and decoding speed.

    What this means for businesses and the inference ecosystem

    ATLAS is now available on Together AI’s dedicated endpoints as part of the platform, at no additional cost. The company’s more than 800,000 developers (up from 450,000 in February) have access to the optimization.

    But the broader implications go beyond a supplier’s product. The shift from static to adaptive optimization represents a fundamental rethinking of how inference platforms should work. As companies deploy AI across domains, the industry will need to move beyond once-trained models toward systems that continually learn and improve.

    Together AI has historically released some of its research techniques as open source and collaborated on projects like vLLM. Although the fully integrated ATLAS system is proprietary, some of the underlying techniques may eventually influence the broader inference ecosystem.

    For companies looking to lead in AI, the message is clear: adaptive algorithms on commodity hardware can match custom silicon at a fraction of the cost. As this approach matures across the industry, software optimization increasingly outperforms specialized hardware.

    Source

  • Scientists just took a giant step towards expanding nuclear fusion: ‘What we’ve done here is the beginning of what is still a long journey’

    Scientists just took a giant step towards expanding nuclear fusion: ‘What we’ve done here is the beginning of what is still a long journey’

    A team of MIT researchers thinks they may have lowered one of the main barriers to achieving large-scale nuclear fusion—putting us one step closer to making an abundant form of energy a reality.

    By harnessing the same processes that power stars, we would have access to a clean, safe and virtually unlimited source of energy. Scientists have built reactors to try to tame fusion, one of the most explored being the tokamak. Essentially a donut-shaped tube that uses strong magnets to confine the plasma needed to power fusion reactions, the tokamak has shown great potential. But to fully realize this, scientists must first navigate the potential pitfalls this energy brings with it, including how to slow down a fusion reaction once it’s underway.

    That’s where the new research comes in: Using a combination of physics and machine learning, researchers predicted how the plasma inside a tokamak reactor would behave given a set of initial conditions—something researchers have long puzzled over (after all, it’s difficult to look inside a fusion reactor mid-run). The paper was published Monday in Nature Communications.

    “For fusion to be a useful energy source, it will have to be reliable,” Allen Wang, lead author of the study and a graduate student at MIT, told MIT News. “To be reliable, we need to be good at managing our plasmas.”

    With great power comes great risk

    When a tokamak reactor is fully functioning, the plasma stream inside it can circulate at speeds of up to about 100 kilometers per second and at temperatures of 180 million degrees Fahrenheit (100 million degrees Celsius). That’s hotter than the Sun’s core.

    If the reactor has to be shut down for any reason, operators begin a process to “reduce” the plasma current, slowly de-energizing it. But this process is complicated and the plasma can cause “scratches and scars inside the tokamak – minor damage that still requires considerable time and resources to repair,” the researchers explained.

    “Uncontrolled plasma terminations, even during ramp-down, can generate intense heat fluxes damaging the internal walls,” Wang explained. “Often, especially with high-performance plasmas, ramp-downs can actually drive the plasma closer to some instability limits. So it’s a delicate balance.”

    In fact, any misstep in the operation of fusion reactors can be costly. In an ideal world, researchers would be able to run tests on working tokamaks, but because fusion is not yet efficient, running one of these reactors is incredibly expensive, and most facilities will only use them a few times a year.

    Looking into the wisdom of physics

    For their model, the team found a delightfully clever way to overcome the limitations of data collection – they simply went back to the fundamental rules of physics. They paired their model’s neural network with another model that describes plasma dynamics, and then trained the combined model with data from the TCV, a small experimental fusion device in Switzerland. The dataset included information about variations in the plasma’s temperature and energy levels at the start of, during, and at the end of each experimental run.
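
    To make the “neural network paired with a physics model” idea concrete, here is a hedged sketch of one common way to combine the two: penalize the network whenever its predictions drift from what a simplified dynamics model would predict. The architecture, dimensions and loss weighting are illustrative assumptions, not the MIT team’s actual code.

        # Illustrative physics-regularized surrogate (assumed structure, not the paper's).
        import torch
        import torch.nn as nn

        class PlasmaSurrogate(nn.Module):
            """Predicts the next plasma state from the current state and actuator settings."""
            def __init__(self, state_dim=8, control_dim=4, hidden=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(state_dim + control_dim, hidden), nn.Tanh(),
                    nn.Linear(hidden, state_dim),
                )

            def forward(self, state, controls):
                return self.net(torch.cat([state, controls], dim=-1))

        def training_loss(model, state, controls, next_state, physics_step, lam=0.1):
            pred = model(state, controls)
            data_loss = ((pred - next_state) ** 2).mean()        # fit measured TCV data
            physics_residual = ((pred - physics_step(state, controls)) ** 2).mean()
            return data_loss + lam * physics_residual             # stay close to the dynamics model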

    From there, the team used an algorithm to generate “trajectories” that showed reactor operators how the plasma would likely behave as the reaction progressed. When they applied the algorithm to real TCV runs, they found that following the model’s “trajectory” instructions was perfectly capable of guiding operators to safely slow down the device.

    “We’ve done this several times,” Wang said. “And we did things much better across the board. So we had statistical confidence that we improved things.”

    “We are trying to address the scientific questions to make fusion routinely useful,” he added. “What we’ve done here is the beginning of a journey that’s still long. But I think we’ve made some good progress.”

    Source

  • Do microcurrent devices really work? Dermatologists tell the truth

    Do microcurrent devices really work? Dermatologists tell the truth

     

    If you regularly watch videos of people’s skincare routines online, you’ve probably seen them use microcurrent devices. These devices are supposed to boost collagen, sculpt the face, improve skin texture and more. At-home devices from popular brands like NuFace, ZIIP and Therabody can cost hundreds of dollars. But do they really work? To find out, we asked dermatologists for more information on the pros and cons of microcurrent devices and how they compare to professional treatments.

     

    Do microcurrent devices work?

    At-home microcurrent devices aren’t cheap – they cost hundreds of dollars. If you’re thinking about investing in one, you’re probably wondering whether they actually work. What do the experts say?

    “Yes, at-home microcurrent devices can provide visible benefits, although they are generally less powerful than professional-grade treatments,” said Hannah Kopelman, a dermatologist at Kopelman Aesthetic Surgery. “These devices deliver low-level electrical currents designed to stimulate the facial muscles and boost circulation. Over time, this can create a temporary lifting effect and provide a slight improvement in skin tone.”

    Although the effectiveness of at-home microcurrent devices hasn’t been exhaustively tested, some studies show they can deliver real results. In a 2024 study, 56 people were instructed to use the Slendertone Face microcurrent device and 52 people were placed in a control group. After using the Slendertone Face device five days a week for 12 weeks, participants reported significantly better skin tone, radiance and fewer wrinkles compared with the control group.

    But before you start using an at-home microcurrent device, it’s important to set realistic expectations.

    “At-home microcurrent devices can be a beneficial part of your skincare routine, but they work best for mild improvements and maintenance rather than dramatic changes,” Kopelman said. “For individuals seeking more immediate or pronounced results, professional treatments remain the gold standard.”


     

    Three panels showing my face before using the device, immediately after, and five days later.

     

    Wellness editor Anna Gragert’s results using the NuFACE TRINITY+, before, during and after.

     

    Anna Gragert/CNET

    Microcurrent device benefits

    When you use an at-home microcurrent device consistently, it can bring a wide range of benefits to your skin. “The main benefits include mild lifting and firming of the skin, improved circulation and improved lymphatic drainage, which can reduce puffiness. Some users also report that their skin looks more refreshed and radiant after consistent use,” Kopelman said.

    However, for deeper wrinkles and significant sagging, Kopelman said these devices likely won’t have the same effect as professional treatments or more invasive in-office procedures.

    Although these at-home devices can be effective, results aren’t one-size-fits-all. According to Robyn Gmyrek, a dermatologist at New York-based UnionDerm, “the benefits of at-home microcurrent devices vary from person to person based on age, health status and behavioral choices such as sun exposure, smoking, diet and the specific device used.”

    As with most skincare treatments and procedures, you shouldn’t expect immediate results. “With at-home devices, consistency is everything,” Gmyrek said. “I recommend using a microcurrent device daily, or at least three to five times a week. Think of it like a gym – if you don’t keep going, you lose the benefits.”

    Potential microcurrent device side effects: are they safe?

    Generally speaking, at-home microcurrent devices are safe when used as directed. And because the currents are tiny, treatments shouldn’t be painful. Some side effects are possible, however.

    “Some people may experience mild redness or a tingling sensation during use, but this is usually temporary. However, improper use – such as applying excessive pressure or using the device longer than recommended – can cause skin irritation or muscle fatigue,” Kopelman said.

    In the 2024 study mentioned above, only a few participants experienced mild skin redness during treatments. None of the participants had any other adverse reactions, suggesting that these devices are mostly safe.

    Although there are dozens of at-home devices that deliver microcurrents, not all are created equal. Each device works differently and has its own advantages and disadvantages. If you’re shopping for an at-home microcurrent device, there are a few things you should look for, according to Gmyrek. She recommends buying a device with FDA clearance, multiple intensity levels and several functions, such as the option to use LED light therapy. You should also look for a device that comes with or requires a conductive gel to properly transmit the microcurrent. Choose a device from a well-established brand with positive reviews from users and experts.

     

    The ZIIP HALO with its Electric Complex Gel on a white bathroom counter.

     

    The ZIIP HALO with its Electric Complex Gel.

     

    Anna Gragert/CNET

    How to use a microcurrent device at home

    Before using an at-home microcurrent device, read the manufacturer’s instructions. Each device may be a little different, but here is an overview of how these devices are meant to be used:

    1. Wash your face: Always start with clean, dry skin before using a microcurrent device.
    2. Apply conductive gel: Most microcurrent devices require a conductive gel that lets the device glide over your face and helps deliver the current to the deeper layers of the skin.
    3. Select the intensity level: If your device has multiple intensity settings, select the one that best suits your skin at the time of use. Start low and increase gradually as you get used to the different settings.
    4. Glide the device over your face: Using light pressure, gently move the device across your face in an upward and outward motion. You can use the device on the jawline, cheekbones, forehead and the sides of the neck (be sure to avoid the thyroid at the center).
    5. Remove the gel from your face and the device: When you’re done, wash the gel off your face. Follow the manufacturer’s instructions for cleaning the device – usually, you can wipe the gel off with a soft, clean cloth. Then you can continue with the next steps of your skincare routine.
    6. Repeat based on the manufacturer’s recommendation: Most at-home microcurrent devices should only be used five times a week for 3 to 5 minutes, but some devices can be used daily. Check the instructions to see how often your device should be used for best results.

    The best microcurrent devices we’ve tested

    To find out which microcurrent devices are best, CNET wellness editor Anna Gragert tested six devices over two months. Based on price, modes, accessories, features, FDA clearance, cleaning instructions, app compatibility and the conductive gel required, she found the NuFACE TRINITY+ to be the best microcurrent device overall.

    The NuFACE TRINITY+ costs US$395. It helps you keep track of time with audible beeps, has helpful tutorials in its app and is easy to charge with the included stand.

    If you’re looking for a device with more features, such as massage and LED light therapy, the US$420 TheraFace Pro is recommended. This device can also cleanse the face. The hot and cold rings are sold separately but can be used with the device. The only potential downside is that the app tutorials are longer and would be better with voice instructions.

    Can you overdo it with a microcurrent device?

    At-home microcurrent devices carry risks, and using them too often can do more harm than good. “Overuse can cause skin inflammation, redness and swelling,” Gmyrek said. If this happens, you should stop using the device immediately until the side effects subside.

    “Using an at-home microcurrent device too often can also cause muscle fatigue, leaving the facial muscles sore or overly tense. Following the manufacturer’s recommended usage schedule can help avoid this problem,” Kopelman added.

    Before you start using an at-home microcurrent device, read the instructions on how often to use it, which varies by product. For example, the Foreo Bear is designed to be used every day. However, the NuFace Trinity Plus and the SkinGym Microcurrent Wand should be used five times a week for 60 days, then up to three times a week for maintenance.

    Don’t be tempted to use the device more often than recommended. Experts agree that overuse won’t deliver better benefits or faster results. What’s more, you could end up damaging your skin in the process.

    Who shouldn’t use a microcurrent device?

    Although at-home microcurrent devices are typically safe, not everyone is a good candidate.

    “Individuals with certain medical conditions, such as epilepsy, a pacemaker or other implanted electrical devices, should avoid using microcurrent devices, as the electrical currents can interfere with their function,” Kopelman said.

    Microcurrent devices should also be avoided during pregnancy unless approved by a healthcare professional.

     

    A person with short black hair receiving a facial microcurrent procedure performed by a professional in a light pink top.

     

    Tatsiana Volkava/Getty Images

    Professional vs. at-home microcurrent devices

    Microcurrent is a popular offering at many med spas and skincare clinics, either as a standalone treatment or as an add-on to other services. According to experts, in-office treatments offer more bang for your buck.

    “The professional microcurrent devices used in clinical settings are much more powerful and can deliver a more significant and longer-lasting lifting effect in a shorter period of time,” Kopelman said.

    In addition, professional treatments can be better tailored to your needs, potentially delivering better results faster.

    “Licensed professionals are also trained to adjust the settings according to your skin’s needs, which makes the treatment more personalized,” Kopelman said. “At-home devices, on the other hand, are designed to be safe for general use, so they deliver lower current levels and require more frequent treatments to maintain results.”

     


     

     

    At-home microcurrent devices aren’t cheap either. FDA-cleared devices can cost between US$150 and more than US$400. Most devices also require a conductive gel, sold separately.

    That said, at-home devices tend to be somewhat cheaper than professional procedures. In-office microcurrent treatments generally cost between US$250 and US$500 per session, but that depends on several factors, including the type of treatment, its duration and your location.

    The bottom line

    At-home microcurrent devices can be a great addition to your skincare routine if you want to improve skin firmness, reduce puffiness and sculpt your face. But it’s important to have realistic expectations about the results. Although at-home devices work, they aren’t as effective as professional treatments.

    If you’re on the fence about buying an at-home microcurrent device, there are a few things to consider. First, think about your skin goals. An at-home microcurrent device won’t eliminate deep wrinkles and isn’t an alternative to Botox, dermal fillers or skin lasers.

    You should also realistically determine how often you’ll use the device. Here’s some advice from Gmyrek: “Be honest with yourself – if you’re not going to use an at-home device consistently, don’t bother spending money on it. Instead, spend that money on in-office treatments that are more effective.”

    The dermatologists we contacted said at-home microcurrent devices can be beneficial, but they work best for mild improvements. If you’re looking for more immediate results, consider professional treatments.

     

    Show more

    When used as directed, microcurrent is generally safe. However, some people may experience mild, temporary redness and tingling during use. If used incorrectly, microcurrent can cause muscle fatigue or skin irritation.

     

    Show more

    Source

  • I thought the Bose QuietComfort headphones had already reached their peak – so I tried the newer model

    I thought the Bose QuietComfort headphones had already reached their peak – so I tried the newer model

     

     


     

    Bose QuietComfort Ultra (Generation 2)

     

    Key findings from ZDNET

    • The Bose QuietComfort Ultra (Gen 2) headphones are available for $449 in five colors.
    • They solidify Bose’s assured confidence in its design, comfort, noise cancellation and sonic performance.
    • The only major and compelling updates are related to battery capacity and power management.

     

    Oct/2025



    How do you convince yourself to pay over $400 for a pair of headphones when they look and operate almost identically to the previous generation? That’s a question I hope to answer, and a question Bose hopes its second-generation flagship headphones will answer based on their performance alone.

    Also: Best headphones of 2025

    I spent two weeks working, traveling and resting in the QuietComfort Ultra (Gen 2), which I’ll call the QC Ultra 2, looking for where Bose spent two years making them more “ultra” than their predecessor. With no major updates to the design, speaker drivers, or noise-canceling performance, I’ll have to dig deeper.

    It’s easy to position the Sony WH-1000XM6 as a direct competitor to the QC Ultra 2, but I wonder: is the QC Ultra 2 competing with its predecessors as much as other brands? Let’s find out.

    Same look, smarter details

    The defining theme of the QC Ultra 2 is that it doesn’t attempt to rewrite Bose’s legacy, but rather to organize it. They look identical to the first generation, except for the yokes, which swap the matte aluminum finish for shiny polished metal.

    The ear cups on the QC Ultra 2 are slightly shallower than those on its predecessor, which may cause fit issues for people with larger heads and ears. Otherwise, the QC Ultra 2’s look, feel, and fit don’t bring any notable changes, which isn’t necessarily a bad thing—if it ain’t broke, don’t fix it.

    Bose QC Ultra (left); Bose QC Ultra 2 (right)

    Bose QC Ultra (Gen 1) (left); Bose QC Ultra (Gen 2) (right).

    Jada Jones/ZDNET

    More significant updates are in the smaller details, including USB-C audio support, available at up to 16-bit/44.1kHz or 48kHz. Thus, the QC Ultra 2 is better suited for gaming or more faithful listening than the first generation. Unlike the Sonos Ace and Apple AirPods Max, the QC Ultra 2 retains its 3.5mm headphone jack. And unlike the Sony XM6, you can listen through the QC Ultra 2’s USB-C port while charging it.

    Also: 7 Smart iPhone USB-C Port Tricks Every User Should Know

    While USB-C audio support in 2025 seems more like an expectation than a new feature to celebrate, it rounds out the QC Ultra 2’s audio capabilities.

    Beauty is in the ears of the beholder

    The QC Ultra 2 has a great sound profile if you like exaggerated bass response, slightly dialed-back mids, and louder highs. It provides great reproduction of bass lines and center vocals on pop tracks such as One Direction’s “Stockholm Syndrome”, and ’90s rap like Craig Mack’s “Flava in Ya Ear.”

    Plus: Why I Keep Four Pairs of Headphones With Me at All Times (and the Unique Role Each Plays)

    On the other hand, the QC Ultra 2 isn’t as strong with layered ambient post-rock like Ben Howard’s “Time Is Dancing.” Songs with more subtle musical textures aren’t as easy to listen to, but turning down the bass helps. Overall, the tuning of the QC Ultra 2 is warmer and more spacious sounding, providing an extended and more accurate bass response than the first generation. Their sound should be fun for most people.

    Bose continues with its version of spatial audio, Immersive Audio, and introduces a new spatial adjustment for podcasts, TV shows, movies and other dialogue-heavy media. The feature works – you can hear your media expanding around your head. Personally, I’d prefer Bose to adopt Dolby Atmos support.

    Still the ANC gold standard

    Bose’s marketing conveys to me that noise cancellation is no longer considered a feature of a pair of headphones – it’s a lifestyle choice. Noise cancellation not only silences the world around you, but also helps create a private listening space when you’re out in public.

    Bose QuietComfort Ultra 2 in Driftwood Sand
    Jada Jones/ZDNET

    Bose follows this philosophy, as noise-canceling upgrades haven’t been at the forefront of the headphone launch. Despite little fanfare surrounding noise cancellation improvements, the QC Ultra 2 is slightly better at noise cancellation than its predecessor. Additionally, the QC Ultra 2’s ANC better covers high-pitched noises like keyboard clicks and low-pitched noises like the roar of an airplane engine than the first generation.

    When the headphones’ active noise cancellation (ANC) is turned on, even when no audio is playing, there is virtually no noise. This feat is highly impressive and comparable to Sony’s WH-1000XM6. The difference between Sony and Bose’s high-end ANC is negligible; you will have to find another category to help you choose one brand over another.

    Also: I tried the AI noise cancellation of the Bose QuietComfort Ultra headphones and can’t go back to normal ANC

    Bose has also refined its AI-powered adaptive noise cancellation feature, ActiveSense. This feature maintains transparency mode and activates noise cancellation when the environment becomes too noisy. ActiveSense is my favorite feature on the QC Ultra Earbuds 2 and it works so well on the earbuds.

    The best feature is the most unexpected

    For me, the QC Ultra 2’s standout feature is related to its improved power management. In addition to increasing battery life from 24 hours in the first generation to 30 hours in the second, Bose also made its product’s power button obsolete.

    You can use the power button on the headphones to turn them on and off, but it’s not necessary. Instead, you can take the headphones off your head and place them horizontally – headphones up or down – and they will immediately disconnect from your devices, disable Bluetooth, and start saving power. Just pop them back in and they’re ready to use.

    Also: Bose took my favorite AirPods Max power feature — and did it better

    I love this feature because headphone on/off buttons are the bane of my existence. If you don’t press the button long enough, the headphones won’t turn off, but if you press it too long, they enter pairing mode. You have to time the long press perfectly to get the headphones to cooperate.

    Bose’s startup behavior was particularly confusing and faulty, prompting the company to deliver the first-generation QC Ultra’s only firmware update specifically to address the issue. For me, not having to use the power button is a lifesaver.

    A well-made companion app

    Many headphone brands are notoriously known for their lackluster companion apps. Problematic features and boring user interfaces keep me away from them. Bose’s app is the best, offering a reliable, pleasant, and easy-to-use experience.

    Bose QuietComfort Ultra 2 in Driftwood Sand
    Jada Jones/ZDNET

    The app highlights the QC Ultra 2’s improved customization, letting you disable the headphones’ touch control strip – one of my least favorite features of the first-generation QC Ultra – and completely turn off noise cancellation, something that was previously impossible on Bose headphones.

    The only issue I have with the Bose app is the limited equalizer. Instead of letting users adjust the EQ by frequency band, Bose only offers boosts and cuts for overall bass, mids, and treble, with no specific, granular, quantifiable measurements.

    ZDNET Buying Advice

    The Bose QC Ultra 2 offers subtle upgrades to its noise cancellation, design, sound profile, power management, and user customization. It isn’t a particularly exciting, headline-grabbing second-gen launch. Instead, Bose focuses on what it does best, delivering smaller but meaningful refinements to address its few blind spots.

    If you already own the first-gen QC Ultra, hold on to them until the wheels fall off or wait for what Bose does next, which could be a major overhaul of the product. If you have the Bose NC700, QuietComfort 35, or 45, and they’re at the end of the road, the QC Ultra 2 would be a significant upgrade for you.

    Plus: Your Sony headphones have new tricks in a free update – but there’s a catch

    In short, between Sony and Bose, Bose offers a sleeker design, a build with less cheap-feeling plastic, a more relaxed fit, and USB-C audio. However, Bose’s sound profile leans heavily toward warmth and sweetened highs to account for strong ANC processing and long listening sessions, especially when traveling or working.

    Sony’s sound profile is also warm, but its equalizer allows for more customization. The WH-1000XM6 has more accurate bass response, clearer mids, crisper highs, and a more spacious soundstage. Its sound is more analytical, but it can become tiring after a few hours of listening.

     

    We gave Bose’s latest headphones an Editors’ Choice award for their improvements over the previous generation’s shortcomings while refining their strengths. Overall, the QC Ultra 2 offers useful everyday features that its competitors don’t, particularly in power management, user customization, and USB-C audio support.

    Most importantly, Bose offers its most valuable and premium features to all users, regardless of device generation or software ecosystem.

     


     

    Source

  • Weaponized AI can dismantle patches in 72 hours – but Ivanti’s kernel defense can help

    Weaponized AI can dismantle patches in 72 hours – but Ivanti’s kernel defense can help

    Adversaries, from cybercrime gangs to nation-state cyberattack squads, are tuning weaponized AI with the goal of defeating new patches in three days or less.

    The faster the attack, the more time there is to explore the victim’s network, exfiltrate data, install ransomware, or set up reconnaissance that can last months or years. Traditional manual patching is now a liability, leaving organizations defenseless against weaponized AI attacks.

    “Threat actors are reverse-engineering patches, and the speed at which they do it has been greatly enhanced by AI,” Mike Riemer, senior vice president of Ivanti’s network security group and field CISO, told VentureBeat in a recent interview. “They are able to reverse-engineer a patch within 72 hours. So if I release a patch and a customer doesn’t apply it within 72 hours of release, they are open to exploitation.”

    This is not theoretical speculation. It is the hard reality forcing vendors to completely restructure their security infrastructure from the kernel up. Last week, Ivanti released Connect Secure (ICS) version 25.X, marking what Riemer calls “tangible evidence” of the company’s commitment to meeting this threat head-on.

    At DEF CON 33, researchers from AmberWolf proved this threat is real, demonstrating complete authentication bypasses in Zscaler, Netskope, and Check Point by exploiting vulnerabilities that had existed for months. These included Zscaler’s failure to validate SAML assertions (CVE-2025-54982), Netskope’s credential-less OrgKey access, and Check Point’s hardcoded SFTP keys exposing tenant logs – flaws left open and exploitable more than 16 months after initial disclosure.

    Why kernel security matters

    The kernel is the central orchestrator of everything that happens on a computing device, controlling memory, processes, and hardware.

    If an attacker compromises the kernel, they take full control of a device, and that foothold can scale to compromise an entire network. Every other layer of security – application, platform, or protection – is immediately bypassed once attackers control the kernel.

    Nearly all operating systems rely on the concept of privilege rings. Applications run in user mode with limited access; the kernel operates in kernel mode with full control. When adversaries break that barrier, they gain what many security researchers consider the holy grail: control of the system and a path to vulnerabilities across entire networks.
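    To make the user-mode/kernel-mode boundary concrete, here is a minimal sketch (assuming a Linux host; the file path and behavior are illustrative, not taken from the article) showing that unprivileged code cannot touch kernel-guarded resources directly and must go through system calls the kernel mediates.

```python
# Minimal illustration of privilege separation on Linux (hypothetical demo, not from the article).
# User-mode code cannot touch kernel-guarded resources directly; it must ask the kernel
# via system calls, and the kernel is free to refuse.
import os

try:
    # /dev/mem exposes physical memory; the kernel denies it to unprivileged processes.
    with open("/dev/mem", "rb") as f:
        f.read(16)
    print("read succeeded (running with elevated privileges)")
except OSError as exc:
    print(f"kernel refused direct access: {exc}")

# Ordinary work happens through syscalls the kernel performs on the process's behalf.
print(f"pid handed out by the kernel: {os.getpid()}")
```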

    Ivanti’s new release addresses this reality directly. Connect Secure 25.X runs on an enterprise-grade Oracle Linux operating system with strict Security-Enhanced Linux (SELinux) enforcement that can limit what a threat actor is able to do inside the system. The solution includes secure boot protection, disk encryption, key management, secure factory reset, a modern hardened web server, and a Web Application Firewall (WAF), all designed to protect the core of the system and significantly blunt external threats.
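    As a rough illustration of what “strict SELinux enforcement” means in practice, the sketch below (a generic Linux check, not Ivanti’s code) reads the kernel’s SELinux interface to confirm the policy is enforcing rather than merely permissive.

```python
# Generic SELinux status check on a Linux host (illustrative; not Ivanti's implementation).
# Enforcing mode means policy violations are blocked, not just logged.
from pathlib import Path

ENFORCE_FILE = Path("/sys/fs/selinux/enforce")

def selinux_status() -> str:
    if not ENFORCE_FILE.exists():
        return "SELinux not present on this host"
    return "enforcing" if ENFORCE_FILE.read_text().strip() == "1" else "permissive"

print(selinux_status())
```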

    “Over the past year, we have significantly advanced our Secure by Design strategy, translating our commitment into real action through substantial investments and an expanded security team,” Riemer explained. “This release is tangible proof of that commitment. We listened to our customers, invested in technology and talent, and modernized the security of Ivanti Connect Secure to deliver the resilience and peace of mind our customers expect and deserve.”

    From operating system rings to deployment rings: a more complete defense strategy

    While operating system rings define privilege levels, modern patch management has adopted its own ring strategy to counter the 72-hour exploitation window.

    Ring deployment provides an automated, phased patching strategy that rolls out updates incrementally: a Test Ring for core IT validation, an Early Adopter Ring for compatibility testing, and a Production Ring for enterprise-wide rollout.

    This approach directly addresses the speed crisis. Ring deployment achieves a 99% patch success rate within 24 hours for up to 100,000 PCs, according to Gartner research. Ponemon Institute research shows that organizations take an alarming average of 43 days to detect cyberattacks, even after a patch has been released.
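    The ring idea is straightforward to express in code. The sketch below is a hypothetical illustration of phased promotion (the ring names and failure thresholds are invented for the example and are not taken from Ivanti’s product): each ring must stay under its failure threshold before the patch is promoted to the next, wider ring.

```python
# Hypothetical sketch of a phased ("ring") patch rollout; names and thresholds are invented.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Ring:
    name: str
    devices: List[str]
    max_failure_rate: float  # halt promotion if exceeded

def roll_out(rings: List[Ring], apply_patch: Callable[[str], bool]) -> bool:
    for ring in rings:
        failures = sum(1 for device in ring.devices if not apply_patch(device))
        rate = failures / max(len(ring.devices), 1)
        print(f"{ring.name}: {rate:.1%} failures")
        if rate > ring.max_failure_rate:
            print(f"Halting at {ring.name}; investigate before promoting further.")
            return False
    return True

rings = [
    Ring("Test Ring", [f"it-{i}" for i in range(5)], 0.0),
    Ring("Early Adopter Ring", [f"pilot-{i}" for i in range(50)], 0.05),
    Ring("Production Ring", [f"pc-{i}" for i in range(1000)], 0.02),
]

# Stub patcher that always succeeds; a real one would call the patch-management API.
roll_out(rings, apply_patch=lambda device: True)
```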

    Jesse Miller, senior vice president and director of IT at Southstar Bank, emphasized: “When judging how impactful something could be, you have to factor everything from current events, your industry, your environment, and more into the equation.” His team uses ring deployment to shrink the attack surface as quickly as possible.

    Attackers aggressively exploit legacy vulnerabilities: 76% of the vulnerabilities leveraged by ransomware were reported between 2010 and 2019. When kernel access is at stake, every hour of delay multiplies the risk.

    The kernel dilemma centers on balancing security and stability

    At CrowdStrike’s Fal.Con conference, chief technology innovation officer Alex Ionescu laid out the problem: “By now, it’s clear that if you want to protect against bad actors, you need to operate in the kernel. But to do that, the reliability of your machine is put at risk.”

    The industry is responding with fundamental changes.

    Authentication bypass happens when kernels are compromised

    AmberWolf researchers spent seven months analyzing ZTNA products. Zscaler failed to validate SAML assertions (CVE-2025-54982). Netskope authentication could be bypassed using non-revocable OrgKey values. Check Point had hardcoded SFTP keys (CVE-2025-3831).

    These vulnerabilities had existed for months. Some vendors patched silently, without CVEs. As of August 2025, 16 months after disclosure, many organizations were still running exploitable configurations.
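    The bug class behind the Zscaler finding is easy to state: an assertion from the identity provider must have its signature verified before anything in it is trusted. Real SAML uses XML signatures checked against the IdP’s certificate; the toy sketch below substitutes an HMAC as a stand-in for that signature purely to contrast the vulnerable and correct patterns – nothing here reflects any vendor’s actual code.

```python
# Toy illustration of the SAML-validation bug class (not any vendor's real code).
# Real SAML uses XML-DSig with the IdP's certificate; an HMAC stands in for the signature here.
import hashlib
import hmac

IDP_SIGNING_KEY = b"stand-in-for-the-idp-certificate"

def signature_is_valid(assertion: bytes, signature: bytes) -> bool:
    expected = hmac.new(IDP_SIGNING_KEY, assertion, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

assertion = b'<Assertion Subject="alice" Role="admin"/>'
forged_signature = b"\x00" * 32

# Vulnerable pattern: trusting assertion contents without checking the signature at all.
# Correct pattern: reject anything whose signature does not verify.
if signature_is_valid(assertion, forged_signature):
    print("accepted (this is what a validation bypass looks like)")
else:
    print("rejected: signature does not verify")
```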

    Lessons learned from compressing three years of kernel security work into 18 months

    When nation-state attackers exploited Ivanti Connect Secure in January 2024, it validated Ivanti’s decision to fast-track its kernel-level security strategy, compressing a three-year project into just 18 months. As Riemer explained, “We had already completed the first phase of the kernel-hardening project before the attack. That allowed us to pivot and rapidly accelerate our roadmap.”

    Key accomplishments included:

    • Migration to 64-bit Oracle Linux:

      Ivanti replaced an outdated 32-bit CentOS operating system with Oracle Linux 9, significantly reducing known vulnerabilities tied to legacy open-source components.

    • Custom SELinux enforcement:

      Implementing strict SELinux policies initially broke a significant number of product features, requiring careful refactoring without compromising the security parameters. The resulting solution now runs permanently in enforcing mode, Riemer explained.

    • Process deprivileging and TPM-backed secure boot:

      Ivanti removed root privileges from critical processes and integrated TPM-based secure boot and RSA encryption, ensuring continuous integrity checks and aligning with the findings and recommendations of AmberWolf’s research.

    There was also a series of independent penetration-testing engagements, each of which confirmed zero successful compromises, with threat actors typically abandoning their attempts within three days.

    Riemer told VentureBeat that customers in the global intelligence community actively observed threat actors probing the hardened systems. “They tried old TTPs aimed at web server exploits. They pretty much gave up after about three days,” Riemer said.

    The decision to go to the kernel level was not a panicked response. “We actually had plans in 2023 to address this before we were ever attacked,” Riemer said. The conversation that sealed the decision happened in Washington, DC. “I sat down with the CIO of a federal agency and asked him point-blank: Will the US government need an on-premises L3 VPN solution in the future?” Riemer recalled. “His answer was that there would always be a mission need for an on-premises L3 VPN-type solution in order to provide encrypted communications access to the warfighter.”

    The future beyond kernel security includes eBPF and behavioral monitoring

    Gartner’s Emerging Tech Impact Radar: Cloud Security report rates eBPF as having “high” mass, with one to three years until early majority adoption. “Using eBPF enables greater visibility and security without relying solely on kernel-level agents,” Gartner notes.

    Most cybersecurity vendors are investing heavily in eBPF. “Today, nearly our entire customer base runs the Falcon sensor on top of eBPF,” Ionescu said during his talk at this year’s Fal.Con. “We’ve been part of that journey as eBPF Foundation members.”

    Palo Alto Networks has also emerged as a major player in eBPF-based security, investing heavily in the technology for its Cortex XDR and Prisma Cloud platforms. This architectural shift lets Palo Alto Networks provide deep visibility into system calls, network traffic, and process execution while preserving system reliability.

    The convergence of CrowdStrike, Palo Alto Networks, and other major vendors on eBPF technology signals a fundamental transformation – delivering the visibility security teams need without the risk of catastrophic failures.
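    For a sense of what eBPF-based visibility looks like in practice, here is a minimal tracing sketch using the open-source bcc toolkit (it assumes a Linux host with bcc installed and root privileges, and is a generic example rather than CrowdStrike’s or Palo Alto Networks’ implementation): a tiny program attached to the execve syscall logs every new process without loading a traditional kernel module.

```python
# Minimal eBPF tracing sketch with the bcc toolkit (requires Linux, bcc, and root).
# Generic example of kernel-level visibility without a traditional kernel module.
from bcc import BPF

program = r"""
int trace_exec(void *ctx) {
    // Emit a line to the kernel trace pipe each time execve() is entered.
    bpf_trace_printk("execve observed\n");
    return 0;
}
"""

b = BPF(text=program)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")

print("Tracing new process executions... Ctrl-C to stop.")
b.trace_print()  # Streams lines like: <comm>-<pid> [...] execve observed
```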

    Defensive strategies that are working

    Patching is often relegated to the list of tasks that get put off because many security teams are understaffed and chronically short on time. These are exactly the conditions adversaries count on when choosing victims.

    It’s a given that if a company doesn’t prioritize cybersecurity, it will take months or even years to apply patches. That is exactly what adversaries look for. Patterns emerge across different victim industries, and they share a common trait: putting off system maintenance in general and security patching in particular.

    Based on interviews with breach victims whose incidents started with patches that were sometimes years old, VentureBeat has seen the following immediate steps they take to reduce the odds of being hit again:

    Automate patching immediately. Monthly cycles are obsolete. Tony Miller, Ivanti’s vice president of enterprise services, confirmed that ring deployment eliminates the reactive patching chaos that leaves organizations vulnerable during the critical 72-hour window.

    Audit kernel-level security. Ask vendors about their eBPF/FSE/WISP plans and migration timelines.

    Layer defenses. This is table stakes for any cybersecurity strategy, but it’s critical to get right. “Whether it was SELinux profiling, root privilege prevention, an updated web server, or the WAF – every layer stopped attacks,” Riemer said.

    Demand transparency. “Another vendor was attacked in November 2023. That information was only made available in August 2024,” Riemer revealed. “That’s why Ivanti has been so public about transparency.”

    The bottom line

    Kernel-level transformation is not optional. It is about survival when AI can turn vulnerabilities into working exploits within three days.

    Ivanti Connect Secure 25.X represents what is possible when a vendor commits fully to kernel-level security, not as a reactive measure but as a foundational architectural principle. Gartner’s strategic planning assumption is sobering: “Through 2030, at least 80% of enterprise Windows endpoints will still rely on hybrid endpoint protection agents, increasing the attack surface and requiring rigorous validation.”

    Organizations should harden what they can now, automate immediately, and prepare for architectural change. As Gartner emphasizes, combining ring deployment with integrated compensating controls – including endpoint protection platforms, multifactor authentication, and network segmentation as part of a broader zero-trust framework – ensures security teams can shrink exposure windows.

    Source

  • This AI-Powered App Makes Lifelong Piano Learning Easy — and It’s 63% Off

    This AI-Powered App Makes Lifelong Piano Learning Easy — and It’s 63% Off

     

    TL;DR: Learn piano for the rest of your life with Skoove Premium Piano Lessons, just $109.97 (MSRP $299.99) while you can.


    Nowadays, you don’t need a classroom or a strict schedule to learn piano. Skoove Premium piano lessons allow you to learn your way, in the comfort of your own home. All you need is the Skoove app, a keyboard and a phone, tablet or laptop.

    Currently, you can secure a lifetime subscription to Skoove Premium piano lessons for just $109.97 (MSRP $299.99).

    Piano lessons that work around your schedule

    If you’ve always wanted to learn an instrument, Skoove Premium Piano Lessons lets you fit it into your busy schedule. There are no set times or appointments to book – just sit down at your keyboard, open the app, and start playing. It’s ready to teach both beginners and skilled pianists.


    More than a million people are taking advantage of Skoove’s flexibility. There are over 400 lessons and thousands of instructional videos in the app, and thanks to the power of AI, Skoove can listen to your playing and recognize your notes. It provides real-time feedback, and if you run into any problems, there are music instructors available for further guidance.

    You learn piano using music you actually like – from pop songs by Adele to classical pieces by Beethoven. And monthly updates ensure you never run out of new music to play.

    Keep improving your piano skills for life with this Lifetime subscription to Skoove Premium piano lessons for just $109.97 (MSRP $299.99).

    StackSocial prices subject to change.

    Source