
    A highly sophisticated supply chain attack has struck one of the most widely used JavaScript libraries in the world. Two malicious versions of the popular HTTP client axios (1.14.1 and 0.30.4) were published to npm after a maintainer account was hijacked, introducing a stealthy remote access trojan (RAT) affecting macOS, Windows, and Linux systems.

    With over 100 million weekly downloads, axios sits at the core of countless applications. This incident represents one of the most precise and operationally advanced attacks ever observed in the npm ecosystem.

    Immediate warning

    If you installed:

    • axios 1.14.1
    • axios 0.30.4

    Assume your system is compromised.

    How the attack worked

    1. Maintainer account hijack

    The attacker gained control of a primary axios maintainer’s npm account and used it to publish malicious versions directly to npm—bypassing the project’s secure GitHub Actions release pipeline.

    These releases:

    • Appeared legitimate (same maintainer identity)
    • Had no corresponding GitHub commits or tags
    • Were published using a stolen long-lived npm token (not secure OIDC publishing)

    2. Pre-staged malicious dependency

    The attacker quietly published a package named:

    plain-crypto-js@4.2.1

    This package:

    • Mimicked a legitimate crypto library
    • Included no obvious malicious code in its main files
    • Contained a hidden postinstall script: "postinstall": "node setup.js"

    This script executed automatically during installation—no import required.
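The mechanism is ordinary npm behavior: a lifecycle script declared in package.json runs automatically on install. A minimal manifest sketch (illustrative, reconstructed from the details above, not the actual malicious file):

```json
{
  "name": "plain-crypto-js",
  "version": "4.2.1",
  "main": "index.js",
  "scripts": {
    "postinstall": "node setup.js"
  }
}
```

Because npm runs `postinstall` after every install of the package, setup.js executes even though no application code ever imports the library. Installing with `npm install --ignore-scripts` disables this class of hook.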

    3. Dependency injection into axios

    The compromised axios versions added a single new dependency:

    "plain-crypto-js": "^4.2.1"

    That’s it.

    No other files changed. No malicious code inside axios itself.

    This is what made the attack so dangerous:

    • Static code inspection shows nothing suspicious
    • The payload executes during install, not runtime
    • The dependency is never actually used in the code
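Because nothing in the axios source itself changed, detection comes down to inspecting the dependency graph rather than the code. A minimal sketch of a lockfile check, assuming an npm lockfile in the v2/v3 format with a top-level "packages" map (the function name is ours; the version list comes from the advisory above):

```javascript
// Scan a parsed npm lockfile for the compromised axios releases and for
// any version of the injected plain-crypto-js dependency.
const COMPROMISED_AXIOS = new Set(["1.14.1", "0.30.4"]);

function findCompromised(lock) {
  const hits = [];
  for (const [pkgPath, meta] of Object.entries(lock.packages || {})) {
    // keys look like "node_modules/axios" or "node_modules/a/node_modules/b"
    const name = pkgPath.split("node_modules/").pop();
    if (name === "axios" && COMPROMISED_AXIOS.has(meta.version)) {
      hits.push({ name, version: meta.version });
    } else if (name === "plain-crypto-js") {
      // any version of this package is suspect
      hits.push({ name, version: meta.version });
    }
  }
  return hits;
}

// Usage sketch:
// const lock = JSON.parse(require("fs").readFileSync("package-lock.json", "utf8"));
// findCompromised(lock); // [] means neither indicator is present
```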

    The payload: Cross-platform RAT dropper

    Once triggered, the malicious dependency:

    1. Executes instantly during npm install
    2. Contacts a live command-and-control (C2) server
    3. Downloads a platform-specific second-stage payload
    4. Installs a remote access trojan
    5. Deletes itself and wipes evidence

    Read the entire article on the Hacking News post: https://www.hackingnews.com/malware/axios-compromi...


      Anthropic appears to have accidentally shipped a .map file with their Claude Code npm package, exposing the full readable source of the CLI tool. The package has since been pulled, but not before it was widely mirrored and dissected.

      A few interesting findings from the leaked code:

      • Anti-distillation mechanisms:
        The client can request “fake tool” injection to poison training data for anyone scraping API traffic. There’s also a system that summarizes intermediate outputs with signed blobs, so recorded traffic doesn’t contain full reasoning chains. Both mechanisms are feature-flagged and relatively easy to bypass.
      • “Undercover mode”:
        A built-in mode prevents the model from revealing internal codenames or even mentioning “Claude Code.” Notably, it cannot be force-disabled in external contexts, meaning AI-generated contributions may intentionally appear human-authored.
      • Frustration detection via regex:
        Instead of using an LLM, user frustration is detected with a simple regex matching phrases like “wtf,” “this sucks,” etc. Cheap and fast, if a bit ironic.
      • Client attestation at the transport layer:
        Requests include a placeholder that gets replaced with a hash by Bun’s native HTTP stack (in Zig), allowing the server to verify requests come from an official binary. This acts like lightweight DRM for API access, though it’s gated behind flags and not airtight.
      • Operational inefficiency:
        A comment notes ~250K API calls/day were being wasted due to repeated failures in a compaction routine. Fixed by adding a simple failure cap.
      • KAIROS (unreleased):
        The code references a heavily gated autonomous agent mode with background workers, scheduled tasks, memory distillation, and GitHub integration—suggesting an always-on agent system in development.
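The frustration detector is simple enough to sketch in a few lines. The exact phrase list in the leaked code is not public, so the pattern below is an assumption:

```javascript
// Toy version of regex-based frustration detection: no LLM call, just a
// case-insensitive match against a short phrase list (our assumption).
const FRUSTRATION_RE = /\b(wtf|ffs|this sucks|are you kidding)\b/i;

function looksFrustrated(message) {
  return FRUSTRATION_RE.test(message);
}
```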

        In 2025, the battle against fraud has evolved — and artificial intelligence is at the center of it. AI has become both the fraudster’s most powerful weapon and the business’s strongest defense.

        According to Stripe’s latest report, The 2025 State of AI and Fraud, malicious actors are using AI to create fake identities, steal card data, and automate large-scale testing of stolen payment credentials. At the same time, businesses are deploying AI-driven tools to detect and prevent fraud faster than ever before.

        This new arms race in digital security highlights a critical paradox: as Artificial Intelligence capabilities advance, so does the sophistication of fraud — but so does the power to stop it.

        Read the full article on Typing AI Biometrics blog: https://typing.ai/blog/how-artificial-intelligence...


          ChatGPT is seeing increased use for search-style queries, but Google continues to dominate global search activity.

          According to new data shared by SparkToro CEO Rand Fishkin, ChatGPT processes about 66 million search-intent prompts per day, while Google handles approximately 14 billion daily searches. That makes Google’s search volume more than 210 times greater.

          Key figures:

          • OpenAI CEO Sam Altman previously reported that ChatGPT handled 1 billion prompts per day in December 2024, rising to 2.5 billion by July 2025.
          • A joint Harvard and OpenAI study estimated that 21.3% of prompts are “search-like”, leading to the current 66 million daily search-intent queries.
          • Google processed 5 trillion searches in 2024, or about 14 billion per day.
          • Estimates also suggest DuckDuckGo drives more referral traffic than ChatGPT.
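The headline ratio follows directly from the two daily figures:

```javascript
// Sanity check on the quoted ratio: ~14 billion Google searches per day
// versus ~66 million ChatGPT search-intent prompts per day.
const googleDaily = 14e9;
const chatgptSearchDaily = 66e6;
console.log(Math.round(googleDaily / chatgptSearchDaily)); // 212, i.e. "more than 210 times"
```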

          Industry data indicates that AI-driven search accounts for less than 1% of web referrals, according to BrightEdge. Earlier this year, Fishkin noted that Google Search was approximately 373 times larger than ChatGPT.

          Research from SparkToro and Datos suggests AI adoption does not reduce Google usage; in fact, users often increase their Google searches when adopting AI tools.

          Outlook:
          While AI-based discovery is expanding rapidly, the latest figures show that traditional search remains the dominant channel for online information and referral traffic.


            HiFiles AI continues to evolve as a secure, intelligent chatbot platform designed for document-based knowledge extraction and conversational support. The most recent updates reflect a commitment to flexibility, security, and enterprise-grade control.

            I. New functionality now available

            1) Chat initialization without document upload
            Users can now initiate new chat sessions without uploading a document. This enables broader applications, such as setting up chatbot logic before final documentation is prepared, or running generic assistants that aren’t bound to a specific file.

            2) Custom domain and subdomain mapping
            Chatbots can now be deployed under a domain or subdomain of the user’s choice. This supports branding and integration use cases where direct embedding is insufficient or where DNS-based identity is preferred.

            A working example of this feature is visible at: https://iwashacked.com

            3) Access control based on user registration status
            HiFiles AI now supports access-level restrictions for bots based on whether an end-user is authenticated or anonymous. This capability is particularly valuable for gated resources, internal documentation, and any deployment where access segmentation is a compliance or policy requirement.
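A hypothetical sketch of how such gating might look server-side; none of these names come from HiFiles AI, and this is illustrative only:

```javascript
// Illustrative access check: a bot config declares who may use it, and the
// request's user object (or null for anonymous visitors) is tested against it.
function canAccessBot(bot, user) {
  if (bot.access === "public") return true;                        // anyone, including anonymous
  if (bot.access === "registered") return Boolean(user && user.authenticated);
  return false;                                                    // default deny
}
```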

            Read the entire article on the Typing AI Biometrics blog: https://typing.ai/blog/hifiles-ai-july-feature-upd...


              Generative AI has transformed the way enterprises build, interact, and automate. But as adoption of large language models (LLMs) accelerates, so do the risks. From model manipulation to shadow usage, these systems introduce new and evolving threat vectors, many of which traditional security stacks fail to detect or mitigate effectively.

              At Typing AI Biometrics, we combine behavioral biometrics with advanced AI security tooling to help organizations secure every layer of generative AI. Below, we outline the top 10 risks facing LLM-powered apps in 2025 and how to reduce exposure while maintaining innovation velocity.

              1. Prompt Injection

              Risk: Attackers craft inputs designed to manipulate model behavior, override safety constraints, or leak context data. These inputs may come from users or third-party systems.


              Mitigation: Apply input validation and structure prompts using strict system/user separation. Monitor for anomalous patterns using behavioral AI. Deploy guardrails to detect and neutralize prompt manipulation before it reaches the model.
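A minimal sketch of the strict system/user separation plus a screening pass. The message shape and the regex are assumptions for illustration, not any specific vendor's API:

```javascript
// Untrusted input is screened and then placed only in a "user" message;
// it is never concatenated into the system prompt. The regex is a toy
// example of pre-model screening, not a complete defense.
const INJECTION_RE = /ignore (all |any )?(previous|prior) instructions/i;

function buildMessages(systemPrompt, userInput) {
  if (INJECTION_RE.test(userInput)) {
    throw new Error("possible prompt-injection attempt");
  }
  return [
    { role: "system", content: systemPrompt },
    { role: "user", content: userInput }, // untrusted text stays in its own slot
  ];
}
```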

              2. Data leakage through outputs

              Risk: LLMs may expose internal data, personally identifiable information (PII), or confidential content unintentionally through completions.


              Mitigation: Implement output filtering and redaction. Use behavioral identity signals to limit access to context-sensitive operations. Enforce context lifespan policies and log access trails.
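As one illustration, a toy redaction pass over completions; the two patterns below are assumptions, and real PII coverage is much broader:

```javascript
// Mask email addresses and 13-16 digit card-like numbers in model output
// before returning it to the caller.
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.-]+/g;
const CARD_RE = /\b\d(?:[ -]?\d){12,15}\b/g;

function redactOutput(text) {
  return text.replace(EMAIL_RE, "[email]").replace(CARD_RE, "[card]");
}
```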

              3. Hallucinations

              Risk: LLMs often generate plausible but false information, leading to poor decisions or compliance breaches when outputs are trusted too readily.


              Mitigation: Integrate retrieval-augmented generation (RAG) with curated knowledge bases. Flag uncertain or unverifiable responses. Include human-in-the-loop (HITL) where business-critical accuracy is needed.

              Read the entire article on the Typing AI Biometrics blog: https://typing.ai/blog/2025-top-10-risks-and-mitig...


                As 2024 draws to a close, Fil Rouge Capital (FRC) is celebrating a year filled with progress, partnerships, and accomplishments. The venture capital firm has once again proven its commitment to fostering innovation and empowering the next generation of startups.

                Here’s a look at what made this year remarkable for Fil Rouge Capital:

                Conversations That Spark Ideas

                📊 1207 Meetings with Founders

                FRC believes that great ideas start with meaningful conversations. This year’s founder meetings laid the foundation for exciting collaborations and groundbreaking ventures.

                Building Connections Across Ecosystems

                🗓️ 210 Events Attended

                From conferences to pitch nights, Fil Rouge Capital stayed deeply engaged in the global startup ecosystem. These events strengthened relationships with founders, investors, and industry leaders, fueling innovation.

                Uncovering Opportunities

                📄 864 Pitch Decks Reviewed

                The team diligently scouted for visionary founders and transformative ideas, ensuring their pipeline is brimming with potential game-changers.

                Celebrating Successes

                🚀 6 Successful Exits

                This year marked significant milestones for portfolio companies achieving successful exits. These accomplishments underscore FRC’s dedication to nurturing businesses toward sustainable growth.

                Exceptional Portfolio Performance

                📈 115% Revenue Growth (YoY)

                Visionary founders in the FRC portfolio delivered outstanding results, achieving an impressive year-over-year revenue increase.

                Read the entire article on: The Startup Network


                  Cybersecurity threats are changing quickly, and more people are working remotely. Businesses need digital identity verification solutions more than ever for cloud services and online transactions. These solutions help confirm customer identities and keep sensitive information safe from unauthorized access.

                  New technologies like artificial intelligence, biometrics, and passwordless authentication are making security stronger. These advancements help businesses deal with ongoing cybersecurity challenges and build trust with their customers.

                  Importance of User Authentication

                  Good user authentication methods keep sensitive information safe and stop unauthorized access. It helps prevent identity theft and impersonation by verifying who users are. It ultimately helps in ensuring only the real owner can access their accounts.

                  Also, many industries have strict rules about data protection and privacy. Using strong authentication methods helps you follow these rules. It can keep you out of legal trouble and help you avoid fines.

                  Read the entire article on the Is it hacked blog: https://isithacked.com/blog/top-authentication-tre...


                    OWASP is a non-profit foundation focused on web application security. It offers freely accessible resources like forums, tools, videos, and documentation on their website. Their notable projects include the OWASP Top 10. It highlights web app security concerns. The OWASP API Security Top 10 identifies prevalent API security risks.

                    An Overview of Top 10 2024 OWASP API Security Risks

                    1) BOLA

                    Broken Object Level Authorization represents a critical vulnerability that comes from the failure to validate permissions of a user to execute a specific action on an object. It can potentially result in the unauthorized access, modification, or deletion of data.

                    According to OWASP, this API security threat is widespread and easily exploitable. Its business impact is moderate, and it is readily detectable.

                    • It is essential to implement a robust authorization mechanism to mitigate this vulnerability.
                    • Developers should conduct thorough checks to validate actions of a user on individual records.

                    They should also perform comprehensive security tests prior to implementing any changes in a production environment. By following these precautions, organizations can significantly reduce the risk of BOLA vulnerabilities and safeguard sensitive data from unauthorized access and manipulation.
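The record-level check itself is straightforward to sketch; the store and field names here are illustrative, not from OWASP:

```javascript
// Object-level authorization: after authenticating the caller, verify the
// requested record actually belongs to them before returning it.
function getOrder(orders, requestUserId, orderId) {
  const order = orders.get(orderId);
  if (!order) return { status: 404 };
  if (order.ownerId !== requestUserId) {
    return { status: 403 }; // authenticated, but not authorized for this object
  }
  return { status: 200, body: order };
}
```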

                    2) Broken Authentication

                    This API security risk is a significant vulnerability that arises when an application's authentication endpoints fail to identify attackers posing as someone else, granting them partial or complete access to the account.

                    It is crucial to have visibility and understanding of all potential authentication API endpoints to mitigate this vulnerability.


                    Read the entire article on the Typing AI Biometrics blog: https://typing.ai/blog/introduction-to-owasp-api-security-top-10-2024


                      AI has been making significant strides in recent years, particularly in the field of generative AI. Generative AI refers to machines and algorithms that can create new content. This new content includes images, text, and even music based on patterns and data.

                      One of the latest trends in generative AI is the emergence of Vertical AI. It promises to revolutionize how AI is applied in specific industries.

                      #genai #AI #artificialintelligence #Biometrics #typingbiometrics #generativeai #authentication #MFA #2FA

                      https://typing.ai/blog/vertical-ai-the-next-revolution-in-generative-ai
