Technology

Explore top LinkedIn content from expert professionals.

  • View profile for Andrew Ng
    Andrew Ng is an Influencer

    Founder of DeepLearning.AI; Managing General Partner of AI Fund; Exec Chairman of LandingAI

    2,234,896 followers

    AI’s ability to make tasks not just cheaper, but also faster, is underrated in its importance in creating business value.

    For the task of writing code, AI is a game-changer. It takes so much less effort — and is so much cheaper — to write software with AI assistance than without. But beyond reducing the cost of writing software, AI is shortening the time from idea to working prototype, and the ability to test ideas faster is changing how teams explore and invent. When you can test 20 ideas per month, it dramatically changes what you can do compared to testing 1 idea per month. This is a benefit that comes from AI-enabled speed rather than AI-enabled cost reduction.

    That AI-enabled automation can reduce costs is well understood. For example, providing automated customer service is cheaper than operating human-staffed call centers. Many businesses are more willing to invest in growth than just in cost savings; and, when a task becomes cheaper, some businesses will do a lot more of it, thus creating growth. But another recipe for growth is underrated: making certain tasks much faster (whether or not they also become cheaper) can create significant new value.

    I see this pattern across more and more businesses. Consider the following scenarios:
    - If a lender can approve loans in minutes using AI, rather than days waiting for a human to review them, this creates more borrowing opportunities (and also lets the lender deploy its capital faster). Even if human-in-the-loop review is needed, using AI to get the most important information to the reviewer might speed things up.
    - If an academic institution gives homework feedback to students in minutes (via autograding) rather than days (via human grading), the rapid feedback facilitates better learning.
    - If an online seller can approve purchases faster, this can lead to more sales. For example, many platforms that accept online ad purchases have an approval process that can take hours or days; if approvals can be done faster, they can earn revenue faster. This also enables customers to test ideas faster.
    - If a company’s sales department can prioritize leads and respond to prospective customers in minutes or hours rather than days — closer to when the customers’ buying intent first led them to contact the company — sales representatives might close more deals. Likewise, a business that can respond more quickly to requests for proposals may win more deals.

    I’ve written previously about looking at the tasks a company does to explore where AI can help. Many teams already do this with an eye toward making tasks cheaper, either to save costs or to do those tasks many more times. If you’re doing this exercise, consider also whether AI can significantly speed up certain tasks. One place to examine is the sequence of tasks on the path to earning revenue. If some of the steps can be sped up, perhaps this can help revenue growth. [Edited for length; full text: https://lnkd.in/gBCc2FTn ]

  • View profile for Brij kishore Pandey
    Brij kishore Pandey is an Influencer

    AI Architect | Strategist | Generative AI | Agentic AI

    680,032 followers

    As technology becomes the backbone of modern business, understanding cybersecurity fundamentals has shifted from a specialized skill to a critical competency for all IT professionals. Here’s an overview of the critical areas IT professionals need to master:

    Phishing Attacks
    - What it is: Deceptive emails designed to trick users into sharing sensitive information or downloading malicious files.
    - Why it matters: Phishing accounts for over 90% of cyberattacks globally.
    - How to prevent it: Implement email filtering, educate users, and enforce multi-factor authentication (MFA).

    Ransomware
    - What it is: Malware that encrypts data and demands payment for its release.
    - Why it matters: The average ransomware attack costs organizations millions in downtime and recovery.
    - How to prevent it: Regular backups, endpoint protection, and a robust incident response plan.

    Denial-of-Service (DoS) Attacks
    - What it is: Overwhelming systems with traffic to disrupt service availability.
    - Why it matters: DoS attacks can cripple mission-critical systems.
    - How to prevent it: Use load balancers, rate limiting, and cloud-based mitigation solutions.

    Man-in-the-Middle (MitM) Attacks
    - What it is: Interception and manipulation of data between two parties.
    - Why it matters: These attacks compromise data confidentiality and integrity.
    - How to prevent it: Use end-to-end encryption and secure protocols like HTTPS.

    SQL Injection
    - What it is: Exploitation of database vulnerabilities to gain unauthorized access or manipulate data.
    - Why it matters: It’s one of the most common web application vulnerabilities.
    - How to prevent it: Validate input and use parameterized queries (see the sketch after this post).

    Cross-Site Scripting (XSS)
    - What it is: Injection of malicious scripts into web applications to execute on users’ browsers.
    - Why it matters: XSS compromises user sessions and data.
    - How to prevent it: Sanitize user inputs and use content security policies (CSP).

    Zero-Day Exploits
    - What it is: Attacks that exploit unknown or unpatched vulnerabilities.
    - Why it matters: These attacks are highly targeted and difficult to detect.
    - How to prevent it: Regular patching and leveraging threat intelligence tools.

    DNS Spoofing
    - What it is: Manipulating DNS records to redirect users to malicious sites.
    - Why it matters: It compromises user trust and security.
    - How to prevent it: Use DNSSEC (Domain Name System Security Extensions) and monitor DNS traffic.

    Why Mastering Cybersecurity Matters
    - Risk Mitigation: Proactive knowledge minimizes exposure to threats.
    - Organizational Resilience: Strong security measures ensure business continuity.
    - Stakeholder Trust: Protecting digital assets fosters confidence among customers and partners.

    The cybersecurity landscape evolves rapidly. Staying ahead requires regular training and keeping pace with the latest trends and technologies.
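    The post's fix for SQL injection (validate input, use parameterized queries) is easiest to see side by side in code. Below is a minimal sketch using Python's built-in sqlite3 module; the table and the injection payload are illustrative, not from the post.

```python
# Minimal sketch: an injectable query vs. a parameterized one, using sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: user input is concatenated straight into the SQL string,
# so the payload rewrites the WHERE clause and returns every row.
unsafe = conn.execute(
    "SELECT id, email FROM users WHERE email = '" + user_input + "'"
).fetchall()

# Safe: the driver binds the value as data, never as SQL syntax.
safe = conn.execute(
    "SELECT id, email FROM users WHERE email = ?", (user_input,)
).fetchall()

print(unsafe)  # [(1, 'alice@example.com')] -- injection succeeded
print(safe)    # [] -- the payload is treated as a literal string
```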

  • View profile for Saanya Ojha
    Saanya Ojha is an Influencer

    Partner at Bain Capital Ventures

    64,579 followers

    Time to dust off the “OpenAI killed my startup” t-shirts. OpenAI just put on its big boy pants and entered the enterprise - deliberately this time, not just by osmosis from consumer demand. Announced today:
    🎙️ Record mode - Audio-only meeting capture, smart summaries, action items
    📂 Connectors - Access Google Drive, SharePoint, Box, Dropbox, OneDrive from inside ChatGPT
    🔍 Deep Research - Pull from HubSpot, Linear, and internal tools via MCP
    📄 Canvas - Turn meetings into documents, tasks, and execution flows

    OpenAI now has 3 million paying business users, up from 2M just three months ago. That’s 1M net new in a quarter. They're signing 9 new enterprises a week. The vision is simple: Stop toggling tabs. ChatGPT doesn't want to be a tool you switch to, but a surface you operate from.

    Why this matters:
    ▪️ Integrations with cloud drives and CRMs mean it’s now context-aware within your business’s actual knowledge stack - not just the public web.
    ▪️ Model Context Protocol support is one of the most important moves - it allows companies to feed ChatGPT real-time context from custom tools, which could unlock vertical-specific agents (e.g., biotech, legal, sales).
    ▪️ Connectors and MCP support create a moat. Once a company connects its internal data sources and builds workflows atop ChatGPT, switching costs rise sharply.
    ▪️ Although Microsoft is a key OpenAI partner, Copilot and ChatGPT are starting to collide. Features like transcription, research, and action items overlap with Copilot for M365.

    This announcement marks another step in our relentless march toward agentic AI, systems that don’t just assist, but observe, reason, and act within real workflows. The battle for the AI-first enterprise stack is officially on. The usual suspects - Google, Anthropic, Microsoft - are obviously in the ring, but so are Notion, ClickUp, Zoom - all hoping to crack AI-powered productivity. The trillion-dollar question is this: Can a model provider ultimately become the place where work happens, or just the thing that helps it along?

  • AI is not failing because of bad ideas; it’s "failing" at enterprise scale because of two big gaps:
    👉 Workforce Preparation
    👉 Data Security for AI
    While I speak globally on both topics in depth, today I want to educate us on what it takes to secure data for AI—because 70–82% of AI projects pause or get cancelled at POC/MVP stage (source: #Gartner, #MIT). Why? One of the biggest reasons is a lack of readiness at the data layer. So let’s make it simple - there are 7 phases to securing data for AI—and each phase has direct business risk if ignored.

    🔹 Phase 1: Data Sourcing Security - Validating the origin, ownership, and licensing rights of all ingested data.
    Why It Matters: You can’t build scalable AI with data you don’t own or can’t trace.

    🔹 Phase 2: Data Infrastructure Security - Ensuring the data warehouses, lakes, and pipelines that support your AI models are hardened and access-controlled.
    Why It Matters: Unsecured data environments are easy targets for bad actors, leaving you exposed to data breaches, IP theft, and model poisoning.

    🔹 Phase 3: Data In-Transit Security - Protecting data as it moves across internal or external systems, especially between cloud, APIs, and vendors.
    Why It Matters: Intercepted training data = compromised models. Think of it as shipping cash across town in an armored truck—or on a bicycle—your choice.

    🔹 Phase 4: API Security for Foundational Models - Safeguarding the APIs you use to connect with LLMs and third-party GenAI platforms (OpenAI, Anthropic, etc.).
    Why It Matters: Unmonitored API calls can leak sensitive data into public models or expose internal IP. This isn’t just tech debt. It’s reputational and regulatory risk. (A minimal sketch of this idea follows the post.)

    🔹 Phase 5: Foundational Model Protection - Defending your proprietary models and fine-tunes from external inference, theft, or malicious querying.
    Why It Matters: Prompt injection attacks are real. And your enterprise-trained model? It’s a business asset. You lock your office at night—do the same with your models.

    🔹 Phase 6: Incident Response for AI Data Breaches - Having predefined protocols for breaches, hallucinations, or AI-generated harm—who’s notified, who investigates, how damage is mitigated.
    Why It Matters: AI-related incidents are happening. Legal needs response plans. Cyber needs escalation tiers.

    🔹 Phase 7: CI/CD for Models (with Security Hooks) - Continuous integration and delivery pipelines for models, embedded with testing, governance, and version-control protocols.
    Why It Matters: Shipping models like software means risk comes faster—and so must detection. Governance must be baked into every deployment sprint.

    Want your AI strategy to succeed past MVP? Focus on and lock down the data.
    #AI #DataSecurity #AILeadership #Cybersecurity #FutureOfWork #ResponsibleAI #SolRashidi #Data #Leadership
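    Phase 4 is the easiest one to make concrete in code. Here is a minimal, hedged sketch of the idea referenced in that item: scrub obvious PII from a prompt before it is sent to a third-party model API. The regex patterns and the call_external_llm stub are illustrative assumptions, not the author's method and not a complete data-loss-prevention control.

```python
# Sketch of the Phase 4 idea: redact sensitive-looking strings before a prompt
# leaves your boundary for an external LLM API. Patterns and stub are illustrative.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),         # card-like numbers
]

def redact(text: str) -> str:
    """Replace sensitive-looking substrings before the text leaves the org."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

def call_external_llm(prompt: str) -> str:
    # Hypothetical stub standing in for a vendor SDK call; a real pipeline
    # would send redact(prompt) here, never the raw text.
    raise NotImplementedError

prompt = "Summarize the complaint from jane.doe@acme.com, SSN 123-45-6789."
print(redact(prompt))
# -> "Summarize the complaint from [EMAIL], SSN [SSN]."
```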

  • View profile for Jeff Winter
    Jeff Winter is an Influencer

    Industry 4.0 & Digital Transformation Enthusiast | Business Strategist | Avid Storyteller | Tech Geek | Public Speaker

    164,182 followers

    What if the real disruption in manufacturing isn’t coming from AI, cloud, or automation... but from the uncomfortable realization that we’ve been investing in all the wrong things?

    According to Deloitte’s 2025 Smart Manufacturing Survey, manufacturers are pouring billions into tech. 𝟕𝟖% are allocating over 𝟐𝟎% of their improvement budgets to smart manufacturing. 𝟒𝟔% are prioritizing process automation. The intent is clear. The excitement is real.

    But… I would argue 𝐰𝐞’𝐫𝐞 𝐬𝐭𝐢𝐥𝐥 𝐧𝐨𝐭 𝐫𝐞𝐚𝐝𝐲. Not in our culture. Not in our org structures. Not in how we prepare our people.

    The data exposes the gap. Human capital is the least mature capability in the smart manufacturing stack. Only 𝟒𝟖% of companies have a training and adoption standard. Yet it’s the number one area they say they want to improve. And while 𝟖𝟓% believe smart manufacturing will attract new talent, more than a third say their biggest human capital concern is simply adapting workers to the factory of the future.

    We like the sound of digital transformation as long as it doesn't slow us down. We like the optics of AI as long as we don't have to redesign how we work. We like talking about the workforce of the future as long as we don’t have to train the one we already have.

    So yes, investment is rising. But if we don’t confront the outdated systems and assumptions holding us back, all we’re doing is layering expensive tech on fragile foundations. The biggest barrier to smart manufacturing isn’t budget, technology, or even talent. It’s us.

    𝐂𝐡𝐞𝐜𝐤 𝐨𝐮𝐭 𝐭𝐡𝐞 𝐟𝐮𝐥𝐥 𝐫𝐞𝐩𝐨𝐫𝐭: https://lnkd.in/e6_QsJcw

    *******************************************
    • Visit www.jeffwinterinsights.com for access to all my content and to stay current on Industry 4.0 and other cool tech trends
    • Ring the 🔔 for notifications!

  • View profile for Armand Ruiz
    Armand Ruiz is an Influencer

    VP of AI Platform @IBM

    199,197 followers

    Guide to Building an AI Agent

    1️⃣ 𝗖𝗵𝗼𝗼𝘀𝗲 𝘁𝗵𝗲 𝗥𝗶𝗴𝗵𝘁 𝗟𝗟𝗠
    Not all LLMs are equal. Pick one that:
    - Excels in reasoning benchmarks
    - Supports chain-of-thought (CoT) prompting
    - Delivers consistent responses
    📌 Tip: Experiment with models & fine-tune prompts to enhance reasoning.

    2️⃣ 𝗗𝗲𝗳𝗶𝗻𝗲 𝘁𝗵𝗲 𝗔𝗴𝗲𝗻𝘁’𝘀 𝗖𝗼𝗻𝘁𝗿𝗼𝗹 𝗟𝗼𝗴𝗶𝗰
    Your agent needs a strategy:
    - Tool Use: Call tools when needed; otherwise, respond directly.
    - Basic Reflection: Generate, critique, and refine responses.
    - ReAct: Plan, execute, observe, and iterate.
    - Plan-then-Execute: Outline all steps first, then execute.
    📌 Choosing the right approach improves reasoning & reliability.

    3️⃣ 𝗗𝗲𝗳𝗶𝗻𝗲 𝗖𝗼𝗿𝗲 𝗜𝗻𝘀𝘁𝗿𝘂𝗰𝘁𝗶𝗼𝗻𝘀 & 𝗙𝗲𝗮𝘁𝘂𝗿𝗲𝘀
    Set operational rules:
    - How to handle unclear queries? (Ask clarifying questions)
    - When to use external tools?
    - Formatting rules? (Markdown, JSON, etc.)
    - Interaction style?
    📌 Clear system prompts shape agent behavior.

    4️⃣ 𝗜𝗺𝗽𝗹𝗲𝗺𝗲𝗻𝘁 𝗮 𝗠𝗲𝗺𝗼𝗿𝘆 𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝘆
    LLMs forget past interactions. Memory strategies:
    - Sliding Window: Retain recent turns, discard old ones.
    - Summarized Memory: Condense key points for recall.
    - Long-Term Memory: Store user preferences for personalization.
    📌 Example: A financial AI recalls risk tolerance from past chats.

    5️⃣ 𝗘𝗾𝘂𝗶𝗽 𝘁𝗵𝗲 𝗔𝗴𝗲𝗻𝘁 𝘄𝗶𝘁𝗵 𝗧𝗼𝗼𝗹𝘀 & 𝗔𝗣𝗜𝘀
    Extend capabilities with external tools:
    - Name: Clear, intuitive (e.g., "StockPriceRetriever")
    - Description: What does it do?
    - Schemas: Define input/output formats
    - Error Handling: How to manage failures?
    📌 Example: A support AI retrieves order details via CRM API.

    6️⃣ 𝗗𝗲𝗳𝗶𝗻𝗲 𝘁𝗵𝗲 𝗔𝗴𝗲𝗻𝘁’𝘀 𝗥𝗼𝗹𝗲 & 𝗞𝗲𝘆 𝗧𝗮𝘀𝗸𝘀
    Narrowly defined agents perform better. Clarify:
    - Mission: (e.g., "I analyze datasets for insights.")
    - Key Tasks: (Summarizing, visualizing, analyzing)
    - Limitations: ("I don’t offer legal advice.")
    📌 Example: A financial AI focuses on finance, not general knowledge.

    7️⃣ 𝗛𝗮𝗻𝗱𝗹𝗶𝗻𝗴 𝗥𝗮𝘄 𝗟𝗟𝗠 𝗢𝘂𝘁𝗽𝘂𝘁𝘀
    Post-process responses for structure & accuracy:
    - Convert AI output to structured formats (JSON, tables)
    - Validate correctness before user delivery
    - Ensure correct tool execution
    📌 Example: A financial AI converts extracted data into JSON.

    8️⃣ 𝗦𝗰𝗮𝗹𝗶𝗻𝗴 𝘁𝗼 𝗠𝘂𝗹𝘁𝗶-𝗔𝗴𝗲𝗻𝘁 𝗦𝘆𝘀𝘁𝗲𝗺𝘀 (𝗔𝗱𝘃𝗮𝗻𝗰𝗲𝗱)
    For complex workflows:
    - Info Sharing: What context is passed between agents?
    - Error Handling: What if one agent fails?
    - State Management: How to pause/resume tasks?
    📌 Example: 1️⃣ One agent fetches data 2️⃣ Another summarizes 3️⃣ A third generates a report

    Master the fundamentals, experiment, and refine... now go build something amazing! Happy agenting! 🤖
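    To make the guide concrete, here is a minimal sketch of the control loop that steps 2, 5, and 7 describe: ask the model for a JSON action, run the named tool, feed the observation back, and stop when the model answers. The llm callable, the prompt format, and the single tool are illustrative placeholders, not any specific framework's API.

```python
# Minimal tool-use agent loop (sketch): structured JSON actions, one tool, bounded turns.
import json

def get_stock_price(symbol: str) -> str:
    """Stand-in for a real data source (the post's 'StockPriceRetriever' example)."""
    return json.dumps({"symbol": symbol, "price": 123.45})

TOOLS = {"get_stock_price": get_stock_price}

SYSTEM_PROMPT = (
    "Reply ONLY with JSON of the form "
    '{"action": "<tool name or final>", "input": "<string>"}. '
    f"Available tools: {list(TOOLS)}"
)

def run_agent(llm, user_query: str, max_turns: int = 5) -> str:
    """llm is any callable mapping a message list to a reply string (vendor SDK goes here)."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]
    for _ in range(max_turns):                    # bound the loop for reliability
        step = json.loads(llm(messages))          # step 7: parse/validate structured output
        if step["action"] == "final":
            return step["input"]                  # the agent's answer
        observation = TOOLS[step["action"]](step["input"])  # step 5: execute the named tool
        messages.append({"role": "assistant", "content": json.dumps(step)})
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "Stopped: turn limit reached."
```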

  • View profile for Andrew Feldman

    Founder and CEO, Cerebras Systems, Makers of the world's fastest AI infrastructure

    28,059 followers

    Water-cooled data centers are more energy-efficient than air-cooled data centers. Water can achieve higher thermal performance and lower energy consumption, resulting in increased energy savings and reduced operational costs. At Cerebras Systems, all our data centers are water cooled, which is part of the reason our systems consume so much less power per token than the competition. The benefits of water-cooled data centers are:
    👍 Higher Thermal Capacity: Water has a significantly higher heat capacity than air, meaning it can absorb more heat and transport it more efficiently.
    👍 Improved Heat Transfer: Water can transfer heat away from components more effectively than air, allowing for a smaller, more efficient cooling system.
    👍 Reduced Fan Power: Water cooling eliminates the need for the high-powered fans typically used in air-cooled systems, resulting in significant energy savings.
    👍 High Density and Flexibility: Water cooling allows for higher equipment density in data centers and greater flexibility in thermal management.
    👍 Power Usage Effectiveness: Water-cooled data centers achieve lower PUE values, which are a measure of the energy efficiency of a data center (illustrated in the sketch after this post).
    👍 Sustainability: Reduced energy consumption and the possibility of using heat recovery systems make water cooling a more sustainable choice.
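    For reference, PUE is simply total facility power divided by IT equipment power, and a perfectly efficient facility scores 1.0. Below is a tiny worked example of that arithmetic; the cooling-overhead figures are illustrative assumptions, not Cerebras or industry measurements.

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# The numbers below are illustrative only.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

it_load_kw = 1000.0
air_cooled = pue(it_load_kw + 500.0, it_load_kw)    # assumed fan/CRAC overhead
water_cooled = pue(it_load_kw + 150.0, it_load_kw)  # assumed pump/heat-exchanger overhead

print(f"air-cooled PUE:   {air_cooled:.2f}")   # 1.50
print(f"water-cooled PUE: {water_cooled:.2f}") # 1.15
```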

  • View profile for Vineet Agrawal
    Vineet Agrawal is an Influencer

    Helping Early Healthtech Startups Raise $1-3M Funding | Award Winning Serial Entrepreneur | Best-Selling Author

    46,011 followers

    AI just helped a couple get pregnant - after 19 years and 15 failed IVF cycles.

    The breakthrough came with an AI tool built by a team at Columbia University. It’s called STAR - the world’s first AI system trained to find sperm that embryologists can’t. The husband had azoospermia - a condition where no sperm is visible under the microscope. Dozens of attempts, surgeries, and even overseas experts had failed.

    But the team at Columbia didn’t give up. They spent 5 years building STAR (Sperm Track and Recovery). The system scans 8 million images per hour using a chip and computer vision, then gently isolates viable sperm missed by even the most experienced lab techs. And it worked.
    ▶︎ STAR found 44 sperm in a sample that had been manually searched for two full days.
    ▶︎ That one breakthrough led to a pregnancy that had felt impossible for nearly two decades.
    ▶︎ And it did so without chemicals, donor samples, or invasive extraction methods.

    For millions of couples dealing with infertility, this is a glimpse of what AI-assisted reproductive medicine could unlock. But more importantly - this shows us what AI in healthtech should be aiming for: Not just more data. Not just smarter models. But real clinical results that change lives.

    And as a healthtech investor, this is what I look for in AI-driven care:
    → A clear pain point
    → A targeted intervention
    → And a story no one can ignore

    What’s your take - could AI reshape fertility care the way it’s starting to reshape diagnostics and mental health? #entrepreneurship #healthtech #innovation

  • View profile for Deedy Das

    Partner at Menlo Ventures | Investing in AI startups!

    108,274 followers

    NVIDIA's $7B Mellanox acquisition was actually one of tech's most strategic deals ever. The untold story of the most important company in AI that most people haven't heard of.

    Most people think NVIDIA = GPUs. But modern AI training is actually a networking problem. A single A100 can only hold ~50B parameters. Training large models requires splitting them across hundreds of GPUs.

    Enter Mellanox. They pioneered RDMA (Remote Direct Memory Access), which lets GPUs directly access memory on other machines with almost no CPU overhead. Before RDMA, moving data between GPUs was a massive bottleneck.

    The secret sauce is in Mellanox's InfiniBand. While Ethernet does 200-400ns latency, InfiniBand does ~100ns. For distributed AI training where GPUs constantly sync gradients, this 2-3x latency difference is massive.

    Mellanox didn't just do hardware. Their GPUDirect RDMA software stack lets GPUs talk directly to network cards, bypassing CPU & system memory. This cuts latency another ~30% vs traditional networking stacks.

    NVIDIA's master stroke: integrating Mellanox's ConnectX NICs directly into their DGX AI systems. The full stack - GPUs, NICs, switches, drivers - all optimized together. No one else can match this vertical integration.

    The numbers are staggering:
    - HDR InfiniBand: 200Gb/s per port
    - Quantum-2 switch: 400Gb/s per port
    - End-to-end latency: ~100ns
    - GPU memory bandwidth matching: ~900GB/s

    Why it matters: training SOTA-scale models requires:
    - 1000s of GPUs
    - Petabytes of data movement
    - Sub-millisecond latency requirements
    Without Mellanox tech, it would take literally months longer.

    The competition is playing catch-up:
    - Intel killed OmniPath
    - Broadcom/Ethernet still has higher latency
    - Cloud providers mostly stuck with RoCE
    NVIDIA owns the premium AI networking stack.

    Looking ahead: CXL + Mellanox tech will enable even tighter GPU-NIC integration. We'll see dedicated AI networks with sub-50ns latency and Tb/s bandwidth. The networking advantage compounds.

    In the AI arms race, networking is the silent kingmaker. NVIDIA saw this early. The Mellanox deal wasn't about current revenue - it was about controlling the foundational tech for training next-gen AI. Next time you hear about a new large language model breakthrough, remember: the GPUs get the glory, but Mellanox's networking makes it possible. Sometimes the most important tech is invisible.
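    The reason nanoseconds matter is the gradient synchronization mentioned above: after every backward pass, each GPU must all-reduce its gradients with every other GPU before the next optimizer step can start. Below is a minimal PyTorch sketch of that step under stated assumptions; the model and sizes are illustrative, and NCCL picks up GPUDirect RDMA / InfiniBand transparently when the hardware supports it.

```python
# Sketch: per-step gradient all-reduce across ranks (the latency-sensitive step).
# Launch with torchrun, one process per GPU.
import os
import torch
import torch.distributed as dist

def sync_gradients(model: torch.nn.Module) -> None:
    """Average gradients across all ranks so every GPU applies the same update."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size

def main() -> None:
    dist.init_process_group("nccl")                    # NCCL backend for GPU collectives
    local_rank = int(os.environ.get("LOCAL_RANK", 0))  # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda()         # illustrative toy model
    out = model(torch.randn(8, 4096, device="cuda"))
    out.sum().backward()

    sync_gradients(model)   # every training step pays this network round trip

if __name__ == "__main__":
    main()
```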

  • View profile for Arin Goldsmith
    Arin Goldsmith is an Influencer

    Leading Employer Brand @ Blizzard Entertainment, Microsoft | LinkedIn Top Voice | Sharing my atypical journey to a fulfilling career | Not A Recruiter!

    85,247 followers

    5 things that aren’t super helpful for colleagues affected by layoffs, and what I’m doing instead:

    1. Asking them to tell me how I can help
    → Proactively leaving LinkedIn recommendations and endorsing skills.

    2. Spamming them with random job postings I see
    → Sending them jobs where I can make an introduction to the recruiter, hiring manager, or someone on the team.

    3. Sending their info to my recruiter connections with no context
    → Sharing their profile with my recruiter connections, along with the specific job postings I know they are interested in.

    4. Only supporting them privately
    → Turning on notifications for them on LinkedIn to provide likes, comments, and hype to give their posts a boost in visibility.

    5. Immediately jumping to offer feedback
    → Listening to see if they are looking to solve problems or just need to vent.

    It’s normal to be a bit lost and not know what to do, especially if this is your first rodeo (like mine). We have more power than we realize, and little tweaks to how we approach things can make a huge difference. Good luck.
