Podcast | Error Correction Breakthrough (20X Boost) — with Photonic

Quantum computing needs low-overhead error correction to truly scale. Building thousands of qubits to end up with a couple of useful logical ones feels like a bad strategy. Photonic recently published a paper describing a new type of error correction code that promises a 20X reduction in the number of qubits needed to run quantum algorithms that solve real business problems. Are these so-called SHYPS QLDPC codes the path to fault-tolerant systems? Will they help multiple types of quantum chips from other vendors get there? Join host Konstantinos Karagiannis for a chat about error correction and more with Stephanie Simmons from Photonic.

For more information on Photonic, visit https://photonic.com/.

Read the technical paper “Computing Efficiently in QLDPC Codes”

https://photonic.com/wp-content/uploads/2025/02/Computing-Efficiently-in-QLDPC-Codes.pdf.

Guest: Stephanie Simmons from Photonic

The Post-Quantum World on Apple Podcasts

Quantum computing capabilities are exploding, causing disruption and opportunities, but many technology and business leaders don’t understand the impact quantum will have on their business. Protiviti is helping organisations get post-quantum ready. In our bi-weekly podcast series, The Post-Quantum World, Protiviti Associate Director and host Konstantinos Karagiannis is joined by quantum computing experts to discuss hot topics in quantum computing, including the business impact, benefits and threats of this exciting new capability.

Stephanie Simmons: When we started taking a look at what architecture we should build using these entanglement distribution-focused qubits, there were other codes available, and nobody was looking at them. There’s a whole bunch of talent in the world that was working on this space, but nobody was working with them. That’s why we’re happy to share it, so that we can have more people working together to realize this dream for everyone.

Konstantinos Karagiannis: Quantum computing needs low-overhead error correction to truly scale. Building thousands of qubits to end up with a couple of useful logical ones feels like a bad strategy. Photonic recently published a paper describing a new type of error-correction code that promises a 20X reduction in the number of qubits needed to run quantum algorithms. Are these so-called SHYPS codes the path to fault-tolerant systems? Will they help multiple types of quantum chips get there? Find out in this episode of The Post-Quantum World. I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era.

Our guest today is the chief quantum officer at Photonic, Stephanie Simmons. Welcome to the show.

Stephanie Simmons: Thanks for having me.

Konstantinos Karagiannis: It’s been a long time coming. We met briefly at the Chicago Quantum Exchange, and I’ve been trying to figure out the best time. But news is always a good time, and your company definitely has some news to share. We’re going to talk a lot about quantum error correction today. Can you give a super-brief explanation on what that is to orient some listeners?

Stephanie Simmons: Error correction is in some sense the journey we’ve all been on or surrounding ourselves with or avoiding over the past 30 years in quantum. It was known that you could use quantum mechanics for amazing things computationally, back in the ’80s. But for the first 10 years, nobody went to try and build anything, because they had no idea how to handle the errors.

This sounds like a deep technical piece, but it’s the thing that’s going to unlock the entire industry. It’s so important. I’m grateful that you and your listeners are taking an interest because it’s central to the entire challenge of quantum. The question “When is quantum?” is really, when do you get scalable, application-grade logical qubits? That’s the only question we need to be answering: How do we get this error correction working? What we’re happy to share with the world is a new way of doing error correction that is way easier and higher-performance than what people had assumed was needed. In some sense, the long-term projections for quantum have come in a whole bunch, so long as you can use these cool new error-correction codes.

Konstantinos Karagiannis: That was the big announcement on February 11, which is pretty recent, but this episode will be posting pretty soon after that. You guys announced a breakthrough with SHYPS QLDPC codes, and you’re going to explain all that, and up to 20X fewer qubits for error correction. That’s huge. That’s a big reduction. Feel free to dig in on what that means for the field and how it changes the timeline to get to that practical quantum computing we were just hinting at.

Stephanie Simmons: We didn’t know that it was going to work when we went into it, either. We’re happy to share this with the world because it will move everybody to start rowing the boat in the same direction. But why does this matter so much? As I said, back in the mid-’90s, they figured out how, even in principle, you could do quantum error correction. Error correction sounds mundane. In classical systems, if a bit flips, you usually have a few copies of that bit you can then check to see, did one of these go wrong? Fine, let’s just carry on. There’s this easy redundancy you can put into your system to deal with these errors and not think about it too much.

Quantum can’t do that, because we can’t copy quantum states. The no-cloning theorem means you can’t copy quantum information. Error correction is this whole big deal. It’s possible. But the first views on how to make it work suggested it would consume so many resources and so much time that it seemed decades away. That’s why sometimes people say, “These things are decades away,” because the resources you’d need for those kinds of error-correction systems are just gargantuan, off the charts. It was a major breakthrough a few years ago when estimates came down to only 20 million physical qubits to do something useful. Now, remember, we’re hearing in the news, “We’re at 100 or 200.” There’s a big gap there. No wonder people think these things are decades away. They’re, like, “That’s where the goalposts are, that’s where we’re at, and that’s our pace of progress for the past 10 years. Why on earth would we think we would cross those goalposts in any reasonable time?”
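To make the classical redundancy described above concrete, here is a minimal illustrative sketch (not from the episode) of a three-copy repetition code with majority-vote decoding. The closing comment notes why the no-cloning theorem rules out the direct quantum analogue.

```python
# Minimal sketch of the classical redundancy described above:
# store three copies of a bit, then majority-vote to recover from a single flip.

def encode(bit: int) -> list[int]:
    """Store three redundant copies of one classical bit."""
    return [bit, bit, bit]

def correct(copies: list[int]) -> int:
    """Majority vote: a single flipped copy is outvoted by the other two."""
    return 1 if sum(copies) >= 2 else 0

codeword = encode(1)           # [1, 1, 1]
codeword[0] ^= 1               # a noise event flips one copy -> [0, 1, 1]
assert correct(codeword) == 1  # the original bit is still recovered

# The no-cloning theorem forbids copying an unknown quantum state, so quantum
# error correction must spread information across entangled qubits instead of
# making literal copies, which is where the large overheads discussed here come from.
```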

But the answer is, those aren’t the real goalposts. Those error-correction codes are called surface codes. You’ll see them everywhere. If you Google “surface code,” you get hundreds of thousands of hits because it has been the de facto standard for quantum. People assume that’s how you do error correction for quantum. But those are the ones that have the big overheads.

One of the things we did going into this business at Photonic was take a look at a bunch of assumptions people had made and make sure they still made sense for our architecture. Here’s how it gets even more technical: The reason you use those surface codes is that they’re provably optimal so long as you work with one assumption, which is that your qubits talk to each other through proximity — nearest neighbors, smushed on top of each other, beside each other — and that’s how interactions work. If you take that assumption, everything falls out and you have to use these codes.

But we’re working from a distributed-computing architecture, so we’re thinking about a distributed quantum computer from day one. Because of that, we know that proximity doesn’t constrain us at all. In fact, any large-scale quantum system that works in that supercomputer regime, where it’s a distributed computer over lots of nodes, won’t be constrained by proximity. So what code should we look at? It’s not a given that you’d use the old-school codes. We went into this a couple of years ago, but it was known in 2010 or so that better codes existed. You don’t need 10,000 physical qubits for one application-grade logical qubit. You can have 100 or 50. That’s a lot more exciting with the kinds of numbers we have today.

But only if we could crack those codes. Somebody had to figure out how to use those codes in a quantum system, and that required a breakthrough nobody knew how to make. In fact, lots of people suggested it might not be possible. What we were happy to share is that there’s this whole new playing field we haven’t been exploring that makes the resource requirements way lower, which means commercial value comes sooner for everybody who can use those codes. That’s why we’re excited to share it with the world, because it represents a moment where it might not even be the codes we introduced in this particular work that end up being the winners. That doesn’t matter so much as, let’s start breaking free of those old assumptions. Codesign for something that offers value as soon as possible, and hit the Go button together. That’s what we’re able to share.

Konstantinos Karagiannis: Before we dig into how you got there, I have a question that might come to mind for listeners: Is this work dependent on one specific modality of qubit, only one type of machine, or does it have the chance of affecting multiple machines?

Stephanie Simmons: In some sense, you can invert the question. Codes this good are not just smaller, and they’re not just faster. They have better decoders. You don’t need as many error-correction rounds. There are all these other things that make them better. These codes are just so much better in every way that lots of modalities are going to find a way. They’ll engineer a way to use these codes because otherwise, you’re stuck with 20 times more cryostats or 20 times more vacuum chambers.

Of course, you’re going to find a way. You’re going to engineer, you’re going to start doing the codesign for the best codes available, because it’s just so central to the quantum stack. That’s going to be the way it goes. To answer your question more directly: From what I know of all these other modalities, I see ways that all of them are going to try, and it’s a very reasonable go. Some of them can do it very easily and some not so easily. But they’re all going to give it a go.

Konstantinos Karagiannis: So it’s not excluding anyone. It’s just that it might take a little bit of effort for some of them.

Stephanie Simmons: The real critical point is connectivity. It’s interesting because we’ve learned that same lesson in classical computing. Take a look at how hyperscalers work, all these workloads. The connectivity and the IO end up dictating the performance of the system. We’re finding the same thing happening in quantum. The way you can connect your qubits gives you the ability to apply these codes. That’s the only major difference: Do you have the connectivity to make this work? Great. Then use these better codes.

The other piece that’s going to come through is the IO. How do you get the quantum IO, which in this case is entanglement? How do you get the entanglement where it needs to be? That’s the IO bottleneck for many systems: If you engineer for that, you’re going to do well at scale. Those are lessons that were learned in classical systems, and we’re seeing them play out again in the quantum world.

Konstantinos Karagiannis: Of course, the goal is always the best ratio possible, because you don’t want to have a million qubits and then three logical ones.

Stephanie Simmons: What’s the ultimate goal? Something relevant to people, like commercial value. That comes about with cost of goods. You have to think about reasonable cost of goods and how much it takes to run one of these things, from an energy perspective and a capital-expenditure perspective. But you also have to deliver something in a reasonable amount of time. It is the full package: How is something valuable? How does it have utility? Yes, size is one of those things, but it’s the full package, and we have to be designing for all of it.

Konstantinos Karagiannis: You and your team have been working on this for a while. This is a long problem. It’s decades long as a challenge. What key insight all of a sudden got Photonic to crack this problem now?

Stephanie Simmons: It was serendipity, a little bit. Truthfully, it is still very much a zero-to-one space. It feels like the field is decades old and these modalities are locked in, but that’s a little premature, because there’s a lot of innovation happening within each modality, and you could even decide to call it a different modality with how many changes are coming through. At the basic level, that was the case 10 years ago or so.

Our company, Photonic, only hit the Go button in 2021. We’re relatively young. I went hunting for qubits about a decade ago because I was looking for something that was best for entanglement distribution. We can get back to that. But when we started taking a look at what architecture we should build using these entanglement distribution-focused qubits, there were these other codes available, and nobody was looking at them. There are these old academic papers that say, “Even though you can’t implement these codes in nature, it’s still an interesting academic exercise to go and explore.” People had no clue that these things could even be achieved because they weren’t thinking that connectivity with quantum was possible. There’s this deep belief that proximity is how you get this thing to work. If you relax that and take an entanglement-first approach, all those assumptions fall away and you have other upsides.

Fortunately, the reason we were able to make progress on this is that, maybe with just a bit of hubris, I looked around at all the codes out there and said, “Let’s go and see if we can make these ones work.” If it pays off, then outstanding. We can pull the timelines in by a huge amount, and if not, we just use the surface code like everybody else.

There’s a whole bunch of talent in the world that was working on this space, but nobody was working with them. We brought them all together, put smart people to work doing smart things, and magic happened. But it needed that little push to start the snowball. That’s why we’re happy to share it with the world, so we can have more people working together to realize this dream for everyone.

Konstantinos Karagiannis: Everyone should be working toward using these codes in their qubits.

Tell us a little bit about your qubits and how your modality differs.

Stephanie Simmons: It’s been a journey. I came up through the ranks. I’ve been in the space since 2001, and I took a look at all the modalities out there and what they were working on. Many of the options out there weren’t thinking about the full-scale distributed system, in my view. From the outset, everybody was still working on getting the basic components to a degree of functionality. Rightly so; that was the approach in academia. But once you start working on this from a company or commercial perspective, where you’re trying to deliver something useful to people as soon as possible and you make the assumption that you need error correction to do that, then you take a different systems-engineering approach and you think about the feasibility of distributed quantum computing.

What do I mean by distributed quantum computing? Taking some number of quantum bits — ideally, a large number — that can work together over a network. When you think about scale, even if you could work with these amazing codes, you still need lots of qubits, and you would still benefit from having more qubits than you need, because you can always make space–time tradeoffs to make things go faster. You’re going to want lots of resources. In classical computing, you want the resources; they make everything better. Having that scalability baked into the way you think about things matters a great deal.

The challenge for almost every quantum architecture out there is that being based on proximity — literally smushing qubits on top of one another — means it’s hard to get clean quantum signals out of the box, whatever box they’re in. All these systems are in some box because they need environmental controls. It’s either ultra-high vacuum or some chip in a cryostat or whatever. It’s some box, and they’re all limiting in the sense that you can make them bigger, but that becomes harder and harder and takes more R&D and engineering. It’s way easier if you can take different nodes and link them together, if your physics allows for that. It does matter what physics you’re working with.

As a professor 10 years ago, I went hunting for what I thought was the unlock for that scalability piece. We knew that silicon spin qubits were amazing. Spins, in general, are amazing. But what was needed was that telecom link: How can we get a good matter-based qubit to send out a photon? Photons are great at room temperature, and they can link across data centers and whatever, especially telecom photons. You don’t need to rewire anything. You don’t need to do any transduction. Everything’s low-loss. You can have it work well.

I went hunting, and I thought it was going to take 30 years to find the thing that was going to work. But we found it in a couple years, and that’s when we hit the Go button from a commercial perspective to do this. What we work on is called the T center. It is a small molecule trapped in silicon, very reproducible. It forms naturally if you have the constituent atoms around, and it spits out single telecom photons that can be entangled with the spins that are left behind directly. This is phenomenal for density. We’ve printed a million on a chip.

The controls are much more challenging than the qubits. We’re scaling the controls now. But the qubits themselves, you can print at scale, and they can link out via telecom light. We’ve already gotten multiple quantum cores working together in a distributed quantum computer in a basic form. It’s getting better every month. These spin qubits are phenomenal, and we can iterate quickly because it’s silicon and it’s telecom. It has that engineering. It’s got a lot of competitive advantages that allow us to move as quickly as we have.

Again, these are very young qubits. They’re five years old. But they’re based on a lot of old physics — high-quality spins — the same physics that allowed us to set three-hour coherence-time records a decade or so ago. Spins in silicon are amazing. What’s different about these is that you can get them out of the box and scale horizontally. That’s our basic technology. We’re excited to be moving as quickly as we are with it.

Konstantinos Karagiannis: That sounds great. Anyone who’s listened to a lot of discussion about modalities and designs has already heard some key things in what you said — this idea that you’ve already got little boxes talking to each other to build a bigger box, which is the dream for everyone right now, to keep scaling up. We’re going to want to dig into that a little bit, and I’m going to try and put you on the spot and get some numbers out of you — some sense of scale and where we’re going. But before we do that, to keep it real, what would be a real-world application that could benefit from this reduced overhead you’re pushing for right now with these error-correcting codes?

Stephanie Simmons: Every single one of them, to be a little bit flippant. It wasn’t clear when people went into commercial quantum about 10 years ago and started trying to use these noisy intermediate-scale quantum systems. It was not known. Even earlier than that — annealers and the rest of it — it wasn’t even known if there was commercial value there. However, it was known that they could access computational space that could not be accessed by classical systems in principle. It was totally valid to try and see what kind of heuristics might emerge to deliver good ROI without error correction.

I was happy to see that early work come through in case a heuristic emerged — a heuristic being an algorithm that doesn’t have a deterministic guarantee but just seems to work. Optimization problems fall into that category. Even AI, at some level, fits into this category: at a certain scale, these things just seem to work. Because they don’t always have that solid complexity-theory backing, it was worth the adventure to go see.

My tack was always to play the long game. I was young coming into this space. I fell in love with quantum when I was 16. I wanted the full career. I knew I had to play the game where we had to think about the large-scale error-corrected system, and that’s where I spent my cycles. But every other application we know of that has commercial value is in this error-corrected space — every application you can think of, including even the heuristics that may click in. There are quantum heuristics that might start showing value once you can rely on the systems, once they’re error-corrected. The entire world of quantum opportunity relies on error correction and benefits from these codes, so long as you can use them. That’s why we’re excited to share them with the quantum world and beyond.

Konstantinos Karagiannis: Is there some pet use case you’re always pushing?

Stephanie Simmons: There’s stuff that gets me up in the morning. But the whole point is that this is an industry, and it’s going to be used on a platform basis for so many things — any typical application you hear of can be applied in a universal SOC system. That’s the point.

Konstantinos Karagiannis: Back to your vision of a networked quantum-computing world, we’re not going to deep-dive into networking in general, because if you ask 10 people about quantum networking, you’ll get a hundred answers. We’re not going to go too deep into what a quantum internet is.

Stephanie Simmons: I’d be happy to. You’re right.

Konstantinos Karagiannis: There are lots of different views of what people want to use it for, but for now, on the idea of putting these systems together to act as one: How close are we to seeing this deployed at scale? What challenges remain in linking these and getting them to work as a system that you could just log in to via cloud access and all of a sudden have 4,000 qubits or something? How’s that roadmap looking?

Stephanie Simmons: I can speak to the broad ambition because, in some sense, when you’re asking about that roadmap, you’re asking about almost all the quantum roadmaps out there now. About 10 years ago, everybody only cared about T2 times and fidelities. That’s the only thing they talked about. Now, we have to recognize that one perfect qubit is perfectly useless. The value only comes when you can link up something like 400 to 2,000 application-grade logical qubits. That is where the money is. It might be that something comes in earlier. But we know that’s where there’s going to be a lot of value for everyone.

How do you get that when each one of the boxes out there constrains your system? There are some approaches out there that hope to hit something like a million or so in one box. That is going to take many R&D cycles. You can say the same thing for classical computing. It’s taken decades to get that number of transistors onto one chip. But you could see even earlier that networking many smaller systems together delivers that kind of value quickly, so long as you can hit those performance numbers.

For us, we’ve already done distributed computing. It needs to go faster. But the physics of the solid-state solution we have does allow it to go as fast as we need it to, in principle. That’s why we’re happy with it. Again, these are five-year-old qubits. Making sure there are no unknown unknowns as you’re going through is important, and we don’t see any so far. It seems to be fitting the models. But we need to go faster, and if we do that, we can go fast enough to use these codes. Then we have all the pieces already lined up, because we have this entanglement-distribution piece.

If you take a look at what’s called quantum resource estimation, or what it takes to build a working quantum system at scale with the right kind of resources, most teams, if not almost all, ignore the costs of the quantum IO. This is the cost of entanglement — getting entanglement to a good degree across many systems. It’s a hidden cost, and it’s the dominant cost in many systems. It’s going to be a lot like classical systems: Unless you’re engineering for solid connectivity and IO at scale, those systems won’t perform well enough to deliver value.

Konstantinos Karagiannis: Not speaking for the whole industry in general, would you have a guess as to when we’ll see x number of logical qubits from Photonic?

Stephanie Simmons: It’s definitely years, not decades. You do have others saying similar things. It’s going quickly, but we’re not going to put dates on it just yet. It’s very encouraging, though.

Konstantinos Karagiannis: You said a couple of things — a million on one chip. We all know something that was just said days ago about that. Our last episode was about that.

Stephanie Simmons: They’ve done a great job thinking about what it does from a systems-engineering perspective. If you don’t network a system and you want to do a full scale-up solution, you have to think carefully about that. They have been, so scaling the controls is a big deal. I don’t want to take that million-qubit chip out of context. That’s a manufacturability and quality-assurance piece for us. The controls are a massive piece of this puzzle, not just getting the entanglement where it needs to be and the yields up and the rest of it. This is a beautiful, multidimensional engineering challenge where you need to benefit from everything you can, including better error-correcting codes. If that can take the count down by a factor of 20, I’ll take it.

Konstantinos Karagiannis: You’ve already mentioned that silicon is a big part of your design. How do you take advantage of that? What kind of edge do you see that giving you? Obviously, we are a silicon world. We’ve been building for this technology for a long time. It’s not that new. It’s just that now they’re qubits too.

Stephanie Simmons: That’s exactly it, and in some sense, we get to benefit from it in two ways. What you might not appreciate is that it’s the best quantum material out there too, partly because we’ve purified it to death. We know how to make super-clean silicon for the semiconductor industry. And because it’s a Group IV element, it has a low magnetic-noise environment. There are all these physics pieces that make it an excellent environment for spin qubits, and we’ve known that for a while.

But it’s not just the quantum performance, which of course you need to hit; it is the dominant photonics platform. The name of the company is Photonic, and we named it that way because that’s the most nonnegotiable glue of a distributed-network quantum computer. At some point, you’re going to hit the limits of your box and want more. You’re going to want to network these things together. If you take that as a given, there’s no option from a commercial sense other than photons. That is the glue that puts it all together.

But what we can offer in silicon is high-density, high-quality storage of information. As I shared previously, we’ve demonstrated three-hour coherence times for spins in silicon, and many nines and all the rest of it. These things are solid. It’s like a trapped ion in a solid; one way to think of it is as a semiconductor vacuum. These things are trapped there. Because of that, you can manufacture quickly. We turn around chips quickly, and we have independent manufacturing chains. We know how to work with silicon. And silicon photonics is essentially how you print fiber optics on a chip. You do want to have that baked into your system, not just telecom. We put our qubits right into those printed fiber optics on the chip. You collect all that light, and you don’t need to worry about recooling or anything. You can kick it hard. There are all these little competitive advantages that make a difference. We’re happy with how well it’s moving.

Konstantinos Karagiannis: Would you say there’s some kind of an advantage tied to your platform around these codes? And do you think this is going to create some kind of arms race now, like, “We’re going to use those codes better and show Photonic”?

Stephanie Simmons: It’s going to be a very dynamic next couple of years, and the sooner we can, as a community, demonstrate commercial value, the more the rising tide lifts all boats. I see no harm in it: If we can go and show a killer app, we’ll be able to get there first. But if something else comes out first, that’s still good for the industry, and if it uses these codes, excellent. What we do know is that, in the long term, a solid-state solution to this kind of engineering challenge looks very good, so long as there are no unknowns that creep in. These things do look like they can have the metrics needed to deliver something useful at scale at a great price point. That’s why we’re just heads-down on execution at the moment. If there are other things that come in to accelerate things, then so much the better.

Konstantinos Karagiannis: Chief quantum officer is a pretty awesome-sounding title. That wouldn’t even have existed a while ago.

Stephanie Simmons: I took that one in 2016.

Konstantinos Karagiannis: That’s cool. What’s your biggest priority right now for your company?

Stephanie Simmons: Execution. Execution. Execution. You have to get your head down and focus. There’s a lot of work in the industry about where the commercial advantages, the exponential speed-ups and the commercial applications exist. We know what performance metrics we’re going to need to hit to deliver them. Everybody inside is very motivated. It is about making sure we move quickly, because, as we’ve shared previously, all these modalities have 30 years under their belt. They have so many academic researchers around the world with buckets and buckets of work contributing to that effort, so they don’t need to do that work in-house. We have to be extremely smart and make smart choices every single day so we can compete and ultimately deliver value soonest. It’s a bit of a boring answer, but it’s all about execution, and that is what we get up in the morning and think about.

Konstantinos Karagiannis: It does lead me to a question I’ll probably leave us with here. What do you think is the next big breakthrough you’re aiming for on this execution path? Is there some next milestone where, once you hit it, you’re, like, “This is the next thing that SHYPS is helping with”?

Stephanie Simmons: What we like about this platform is that we’ve proven out all the basics. A lot of the systems that have to network their systems will have to go back to the drawing board in a fundamental sense, because the amount of entanglement you need is a lot if you want to do anything useful at a reasonable rate. You have to redesign almost the basic functionality of your qubit. We don’t have to do that. We’ve already gotten distributed quantum computing working. We just need to go faster, and people know how to iterate and improve. We’re heads-down on iterating and improving, but we don’t need to go back to the drawing board at any point to add functionality. That core functionality has all been proven out. We need to get it to the point where it’s fast enough to do high-grade logical qubits.

On a distributed computer, that’s the unlock. At that point, if you can get distributed logical qubits, it’s only a matter of money. Then you can start imagining building out larger-scale systems that deliver the value needed for commercial applications, and you can think about working on costs and the rest, but we don’t need to redo any physics or learn anything new. Everything we’re doing is about heads-down execution toward that goal.

Konstantinos Karagiannis: It sounds like there’s a whole lot of secret interconnect research going on at Photonic right now.

Stephanie Simmons: This is a good point. Even the term interconnect presupposes a certain way of looking at things. You imagine even just classically, “We have a computer, and then we just stick on some transduction thing or some NIC card, to get the signal out.” But that’s not the way. If you want to do logical operations between high-efficiency quantum error-correction blocks, you almost need every single qubit talking to its copy. You need a huge amount of entanglement to make that not be the rate-limiting step for your system. There are caveats and asterisks around that, but the more you have, the better, even when you do clever compiling.

Even the term interconnect presupposes you take something and change it into something else. For us, it’s in the qubit itself. We have what’s called a spin-photon interface. That’s true for ions, and any solid-state color center is like this. They talk to photons and they store their information in their spins. It’s one object that has both interfaces. It can talk to both degrees of freedom just as well. That’s important because it’s not just about interconnect. It’s, like, “Let’s work backward from success.” What does it need? What do you need to deliver a large-scale distributed quantum computer? You actually need this baked right into the core of what you do, and then that scale can come naturally.

Konstantinos Karagiannis: It sounds great. I can see a pathway forward in turning your execution into something practical in the world with that approach for sure. It sounds like you’re building right from the ground up. Lots of luck, and I hope to have you on before we know it to share some systems we’re accessing on the cloud and benefiting from.

Stephanie Simmons: Thank you so much for your interest.

Konstantinos Karagiannis: Now, it’s time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts we discussed today in case things got too nerdy at times. Let’s recap.

Photonic wants to slash qubit overheads and reimagine how quantum systems connect. We all know quantum computers are fragile and error-prone. For decades, the field’s been wrestling with how to tame those errors. Error correction isn’t just a technical footnote — it’s the key to unlocking quantum’s full potential. Old-school surface codes demanded thousands of physical qubits just to produce one reliable logical qubit. A few years back, estimates pegged 20 million physical qubits for anything useful. With labs boasting just 100 or so physical qubits today, it’s no wonder skeptics wrote quantum off as decades away. But Photonic unveiled quantum low-density parity-check (QLDPC) codes that slash the qubit requirement by up to 20 times: Think 50 to 100 physical qubits per logical one with these new SHYPS codes.
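As a rough, hedged illustration of what that overhead reduction means, the following sketch uses only the approximate round figures quoted in this episode; the physical-qubit budget is a hypothetical number chosen for the example.

```python
# Back-of-the-envelope arithmetic with the approximate figures quoted in the episode.

physical_budget = 1_000_000      # hypothetical physical-qubit budget, for illustration only

surface_code_overhead = 10_000   # ~10,000 physical qubits per logical qubit (figure quoted in the interview)
shyps_overhead = 100             # ~50-100 physical qubits per logical qubit with SHYPS-style QLDPC codes

print(physical_budget // surface_code_overhead)  # ~100 logical qubits under surface-code overheads
print(physical_budget // shyps_overhead)         # ~10,000 logical qubits under the new overheads
```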

Surface codes thrived on the idea that qubits only talk to their nearest neighbors. But Photonic is betting on a distributed quantum architecture where connectivity isn’t bound by proximity. This opened the door to dusting off decade-old academic papers on better codes no one thought would work in practice.

Photonic’s spin qubits are tiny molecules trapped in silicon, spitting out telecom-wavelength photons that can zip through fiber optics. They’ve printed a million of these on a chip and are working on controlling them at scale. The path forward relies on distributed quantum computing, linking nodes across rooms or data centers.

Photonic has already demonstrated entanglement over a network. Photonic’s SHYPS QLDPC codes thrive on this type of connectivity. Stephanie expects other modalities to adapt to take advantage of the codes too. Photonic is focused on an approach of iterate, accelerate and execute in this march toward fault tolerance. The goal is commercial value: delivering something useful at a sane cost and timeline. They’re racing to get distributed logical qubits online in years, not decades.

That does it for this episode. Thanks to Stephanie Simmons for joining to discuss Photonic’s work in error correction, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World, and leave a review to help others find us. Be sure to follow me on all socials @KonstantHacker. You’ll find links there to what we’re doing in Quantum Computing Services at Protiviti. You can also DM me questions or suggestions for what you’d like to hear on the show. For more information on our quantum services, check out Protiviti.com, or follow Protiviti Tech on X and LinkedIn. Until next time, be kind, and stay quantum-curious.
