The closest you can get to perfectly secure Bitcoin transactions (without doing them in your head)
@pa2013 helpfully posted Alon's BitKey announcement from last week to the Bitcoin Reddit, which sparked an interesting discussion regarding whether or not you can safely trust BitKey to perform air-gapped transactions. I started responding there but my comment got so long I decided to turn it into a blog post.
Air-gaps are not enough
As the astute commenters on Reddit correctly pointed out, just because you are using an offline air-gapped computer doesn't make you safe:
For example an offline computer can have a tainted random number generator, modified to only feed you addresses that the attacker can sweep at a later point in time.
I agree 100%. There are many ways a tainted air-gapped system can betray you, including smuggling out your secret keys via a covert channel (e.g., USB keys, high frequency sound, covert activation of a Bluetooth/wifi chipset, etc.)
The good news is that:
- Even if you assume BitKey is evil you can still use it to perform highly secure Bitcoin transactions. Details in the "If I tell you I'll have to kill you" section below.
- Most of the attacks against air-gapped systems are hard to hide if you build your own image from source.
The bad news is that:
- Most people won't build from source.
- Without deterministic builds you can't tell if the system image you are using is a faithful representation of the source code. A deterministic build means that everyone who builds from source gets exactly the same binary output, bit for bit.
- You can't trust RNGs without deterministic builds. A properly designed "evil" RNG looks just like a "good" RNG. By observing the output it is possible to prove that an RNG is insecure, but absolutely impossible to prove that it is secure.
Random Number Generators are the perfect hiding place for a backdoor
This makes RNGs the perfect place to hide a backdoor. I'd bet money that RNG-level backdoors are where intelligence agencies like the NSA are focusing their efforts to compromise Internet security.
For this reason I personally don't trust RNGs at all when the stakes are high. Any RNG, not just the one in BitKey.
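To make that concrete, here is a minimal Python sketch (not BitKey code) of why you can't detect a backdoored RNG from its output alone: the "evil" generator below just hashes a counter together with a secret only the attacker knows, so its output passes every statistical test while remaining fully reproducible to the attacker.

```python
# A minimal sketch (not BitKey code) of why an "evil" RNG is undetectable from
# its output alone. The backdoored generator hashes a secret plus a counter;
# statistically the stream looks random, but whoever planted the secret can
# regenerate every "random" byte you ever used.
import os
import hashlib


def honest_rng(n):
    """Ask the OS for n bytes of real entropy."""
    return os.urandom(n)


ATTACKER_SECRET = b"known only to whoever planted the backdoor"


def evil_rng(n, counter=[0]):
    """Deterministic stream: SHA-256 of a secret plus a counter.

    The mutable default keeps the counter between calls on purpose.
    Indistinguishable from random by any statistical test, yet fully
    reproducible by anyone holding ATTACKER_SECRET.
    """
    out = b""
    while len(out) < n:
        out += hashlib.sha256(ATTACKER_SECRET + counter[0].to_bytes(8, "big")).digest()
        counter[0] += 1
    return out[:n]


if __name__ == "__main__":
    # Both look equally "random" to the victim...
    print("honest:", honest_rng(32).hex())
    print("evil:  ", evil_rng(32).hex())
    # ...but the attacker can replay the evil stream at will and sweep any
    # keys derived from it.
```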
Why?
Even if you audit the source code that the RNG is being compiled from, you still have to trust that the compiler is translating the source code faithfully. Worse, this turns out to be a recursive problem that was recognized waaaay back.
A solution you don't have to trust is better than one you do
In its current form BitKey is a Swiss Army knife of handy Bitcoin tools which you could use to implement any workflow. What's interesting is that this includes at least one workflow which doesn't require you to trust BitKey. I call it the "If I tell you I'll have to kill you" workflow.
But first, we need to recognize that there is an inescapable trade-off between convenience and security, and since the risk is proportional to the value of your wallet it doesn't make sense to enforce a specific trade-off. We want to help make the most paranoid usage model practical for day to day use, but at the same time we want to create tools that let the user decide. For low value wallets maybe you're willing to trade off some security for better usability.
On the flip side, as someone who uses BitKey to perform very high security transactions routinely, once you get the hang of it it's not too much trouble to go a bit overboard and sleep well at night. Better safe than sorry.
If I tell you I'll have to kill you
It turns out you can create secure Bitcoin transactions offline without having to trust the system performing the transaction. Do that and you can mostly dispense with having to trust third parties.
This is a good thing because trusted third parties are a security hole:
http://nakamotoinstitute.org/trusted-third-parties/
Instead of trusting the solution you just have to trust the security protocol and its underlying assumptions, which you can verify yourself.
The trick is:
- Don't use the RNG. Provide your own entropy. Use dice! (A minimal sketch of turning dice rolls into key material follows this list.)
- Assume BitKey is evil. Work around that by enforcing a strict flow of information to prevent it from betraying you.
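As promised in the first bullet, here is a minimal sketch of what "provide your own entropy" can look like in practice. It is an illustration, not the exact BitKey workflow: roughly 100 rolls of a fair six-sided die carry more than 256 bits of entropy, which can be hashed down to a 32-byte secret and fed to your wallet software as a seed.

```python
# A minimal sketch (not the exact BitKey workflow) of replacing the RNG with
# your own dice entropy: each roll of a fair die contributes log2(6) ~ 2.58
# bits, so 100+ rolls comfortably exceed 256 bits, and hashing whitens any
# bias while preserving up to 256 bits of that entropy.
import hashlib
import math


def dice_to_seed(rolls):
    """Turn a string of dice rolls ('1'-'6') into 32 bytes of key material."""
    if not rolls or any(c not in "123456" for c in rolls):
        raise ValueError("rolls must only contain digits 1-6")
    bits = len(rolls) * math.log2(6)
    if bits < 256:
        raise ValueError(f"only ~{bits:.0f} bits of entropy; roll at least 100 times")
    return hashlib.sha256(rolls.encode("ascii")).digest()


if __name__ == "__main__":
    # Illustrative only: in real use, type in your actual physical dice rolls.
    example_rolls = "162534415263" * 9   # 108 digits, all in 1-6
    print(dice_to_seed(example_rolls).hex())
```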
For example, let's say there are two computers: BLUE and RED.
I'm calling this the "If I tell you I'll have to kill you" model because once you give BitKey access to the secret keys in your wallet you assume it will try anything to smuggle them back out to the attacker. To prevent that you will have to quarantine BitKey, get the signed transaction out, then kill it.
Now let me explain how that works in practice.
BLUE is a regular Internet-connected PC running a watch-only wallet (e.g., BitKey in cold-online mode, or Ubuntu running Electrum in watch-only mode). Connected to the BLUE PC is a BLUE USB drive.
RED is an air-gapped PC that has no hard drive, NIC, WiFi, Bluetooth, sound card, etc. It only has a keyboard, monitor and USB port.
Next to RED is a RED USB drive. It is NOT plugged into RED (yet).
On BLUE you create an unsigned transaction and save it to the BLUE USB drive.
On RED you boot BitKey into RAM (e.g., in cold-offline mode). You then plug in the BLUE USB drive and copy the unsigned transaction into RAM. Then you unplug the BLUE USB drive.
At this point RED has the unsigned transaction in RAM but it can't sign it yet because it doesn't have access to the wallet.
So you plug the RED USB drive that contains the wallet into RED. You sign the transaction. You encode the JSON of the signed transaction as a QRcode. You read the QRcode with your phone. Verify that the inputs and outputs are correct. You broadcast the signed transaction to the Blockchain from your phone.
Then you shut down the RED air-gapped computer and leave it turned off for a few minutes to make sure the wallet has been wiped from RAM.
The only thing coming out of RED is the QRcode for the signed transaction and you can verify that with a very simple app on a separate device like your phone.
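For illustration, here is a minimal sketch (not BitKey's actual tooling) of that one-way output step, assuming the third-party qrcode Python package is available on the air-gapped live image: the signed transaction JSON is rendered as a QR code shown on RED's screen, and nothing else ever crosses the air-gap.

```python
# A minimal sketch (not BitKey's actual tooling) of the only output that ever
# leaves RED: the signed transaction, rendered as a QR code you scan with a
# separate device. Assumes the third-party "qrcode" package (with Pillow) is
# installed on the air-gapped machine's live image. Values are placeholders.
import json
import qrcode

signed_tx = {
    "hex": "0100000001...",          # raw signed transaction (placeholder, truncated)
    "inputs": ["1ExampleInputAddr..."],
    "outputs": [{"address": "1ExampleDestAddr...", "btc": "0.5"}],
}

# Render to an image shown on RED's screen; nothing is written back to the
# BLUE drive and nothing else crosses the air-gap.
img = qrcode.make(json.dumps(signed_tx, sort_keys=True))
img.save("signed-tx-qr.png")
print("Scan signed-tx-qr.png from RED's screen with your phone, verify the "
      "inputs/outputs, then broadcast from the phone.")
```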
It's not perfect security, because an evil BitKey might conspire with an evil phone by creating an evil QRcode that sends your Bitcoins to another address or leaks your private key.
But it's as close as you can get without doing the transaction in your head, and BitKey has all the tools to let you do that.
Areas for improvement
- Improve usability by adding a self-documenting wizard:
Improve usability and reduce the potential for human error by adding a wizard mode in which BitKey guides you step by step in performing secure air-gapped Bitcoin transactions.
- Port BitKey to work on the Raspberry Pi:
I recently bought a few Raspberry Pis for this purpose. A $35 air-gap running BitKey on super cheap open hardware would not only be cheap and practical, it would also prevent us from having to trust our PCs / laptops not to be compromised at the hardware level. On a typical laptop / PC there are way too many places for bad stuff to hide, though I expect the truly paranoid will wrap their Raspberry Pis in tinfoil just in case.
Also, I think this would be a great opportunity to get TurnKey in general working on the Raspberry Pi.
How deterministic builds fit into the puzzle
Deterministic builds are another way around the problem of having to trust third parties. As seen above, we can get very good security without them, but only by assuming the system we are using is already compromised and limiting how the poison can spread.
But for many applications that just isn't practical. Often you need a two way information flow (e.g., privacy applications) and there are too many ways for a malicious system to betray you.
Full system deterministic builds are going to be essential for those usage scenarios. It's not a silver bullet, but unless everyone's build system is compromised you can at least rely on the translation of source code to binary being faithful.
This improves security dramatically because:
- With deterministic builds you don't have to trust us not to insert backdoors into the binary version of BitKey. I trust myself not to do that, but coming from a military security background I can easily empathize with anyone who doesn't. (A minimal verification sketch follows this list.)
- You also don't have to trust us to be capable of resisting attacks by Advanced Persistent Threats (AKA super hackers) that might figure out how to worm their way into our development systems. Personally, I believe it is unwise to expect any single individual or organization to resist truly determined attacks. If the NSA wants into your system they are going to get in.
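Here is the verification sketch promised above. The filename and the comparison digests are hypothetical placeholders; the point is simply that with a deterministic build, a single hash comparison is enough to check that the image you downloaded matches what independent builders produced from the same source.

```python
# A minimal sketch of the check a deterministic build makes possible: hash the
# image you downloaded and compare against digests published by independent
# builders. The filename below is a hypothetical placeholder.
import hashlib
import sys


def sha256_file(path, chunk=1 << 20):
    """Stream the file through SHA-256 so large images don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()


if __name__ == "__main__":
    image = sys.argv[1] if len(sys.argv) > 1 else "bitkey.iso"
    print(f"{sha256_file(image)}  {image}")
    # With a deterministic build, this digest should match the one you get by
    # building from source yourself, and the ones other builders publish.
    # Any mismatch means the binary is not a faithful translation of the source.
```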
The problems with deterministic builds are:
- You still need to audit millions of lines of source code.
- We don't have full-system deterministic builds yet. Nobody does. That's something a lot of people in the free software world are working on though.
Comments
BitKey can provide better security than Trezor
Thank you for your comments. They give me a chance to clear up misunderstandings other people might have but refrain from voicing.
When used in the right way BitKey can provide better security than Trezor.
Don't get me wrong, I'm not saying Trezor is worthless. Once it comes out Trezor is going to be a great addition to the Bitcoin toolbox.
However, for high value wallets, I'd much prefer a solution that unlike Trezor:
Frankly, as convenient as it might be, a USB interface for a device with that kind of risk model is very unwise. Especially for people with high value wallets, who would presumably be first in line to use Trezor. An optical channel based on QRcodes, or better yet OCR of human readable text, would be much more secure.
That's the idea behind BitKey. The solution we wanted didn't exist so we created it ourselves.
Hardware wallets like Trezor are not a silver bullet
Sure, when Trezor finally ships I'll be one of the first in line to use it, but if the stakes are high enough I won't trust it absolutely, only as another line of defense (e.g., I'll connect it to an air-gapped BitKey and refrain from using its RNG).
Why? Because trusted third parties are security holes and the way Trezor has been designed requires you to trust Trezor. That's dangerous. I don't think we can 100% trust Trezor's developers, Trezor's manufacturing process, the ability of Trezor's developers to resist attacks by advanced threats, etc.
If Trezor fails, it fails catastrophically. For example, Trezor communicates via USB, which has a significant attack surface. Not just at the software level (e.g., the kernel USB stack), but also at the hardware level. Especially when you factor in how few sources there are for USB chipsets. It's a very low margin business, which has made it a central point of failure.
I wouldn't be surprised if an attack against the USB stack, made possible with the helpful intervention of a friendly intelligence agency, eventually leaked out (or was independently discovered) and was exploited by criminals. Or by former NSA contractors looking to make an extra billion or two.
Regarding the rest of your comment, the title of your comment does it justice. I'm trying to figure out what you were responding to. Either you didn't bother to read the post, or you were so biased by a state of mind in which pretty much everyone can safely be assumed stupid that you found a way to jump to that conclusion anyway.
So I'm tempted to ignore your straw man arguments altogether, but others might be confused by them so I'll respond:
Also, we don't really care if TurnKey is the first to do that in the free software world. We care about solving this problem, not winning a race. That's what's great about free software. Like math or science, when someone figures out how to solve a problem everybody can build on it. Everybody wins.
Trezor Developer Tomas Dzetkulic I presume!
Hi Tomas, what an honor. I now understand what is driving the desire to dismiss the problem BitKey solves, but your aggression is misplaced. In fact, I'm a big fan and I think you guys are doing great pioneering work, even if I disagree with some of your design decisions. Which is OK. You rarely get it perfect the first time around. With that said, as soon as Trezor comes out and I can get my hands on one we'll be adding support for it to BitKey. Like I said previously, I think hardware wallets will be a valuable addition to the BitKey/Bitcoin toolbox.
Regarding your questions:
A solution (like BitKey) which you don't need to trust is inherently more secure than a system that you do need to trust (like Trezor). If you don't trust me as an authority in this matter, read what Satoshi Nakamoto has to say about this: Trusted third parties are security holes.
To use Trezor I have to trust it on many levels. Opening up the code and hardware on your end doesn't prevent me from ending up using an evil version of Trezor that steals my Bitcoin.
I think that's more likely to happen because Trezor by its nature is an enticing attack target and hence a central point of failure. I need to trust Trezor's developers. I need to trust your production line. I need to trust every single entity on the shipping route, including your fulfillment company, the mail carrier, my mail man.
It doesn't matter that the Trezor hardware is "open" if most users (who I assure you are not comfortable around a breadboard) will only get their hardware wallet from one Bitcoin related source and have no easy way to verify that the technical description is faithfully translated into hardware.
This is true for any hardware platform but I think we can safely assume your production line is going to be a more of a target for attack than a truly open hardware platform such as the Raspberry Pi, which is designed by a non-profit organization and produced by multiple manufacturers.
Why? Because security depends on architectural design decisions more than it depends on the quality of the implementation.
In the real-world, components and implementations are imperfect translations of their creator's intention. The only exception is perhaps software that has undergone formal verification that mathematically proves its correctness. To the best of my knowledge no part of Trezor has been formally verified.
Since an imperfect component can fail, the first question a designer of any real-world security system should ask is - what happens if it does?
- The house of cards model: the security architecture is interdependent, so the various components rely on each other to provide security. If one part fails, the security of the system as a whole crumbles, much like a house of cards, or a one-legged table.
- The military fortress model: the security architecture is independent, so the various components reinforce each other such that if one part fails, the security of the system remains robust. Much like a table with 8 legs, or a military fortress that has multiple layers of defense. Defense in depth, they call it.
FWIW, I don't think QRcodes are the ideal solution either because they are not human readable. I would prefer optical character recognition of human readable text, so you can actually see what is being transmitted.
It all comes down to complexity and assumptions
While still imperfect, communicating with QRcodes is much safer than communicating through USB for two very simple reasons:
1) Complexity
QRcode generators/decoders have a smaller attack surface because they are orders of magnitude simpler than USB interfaces. This is because they only have to do one very simple thing: encode and decode a string of text.
2) Assumption of trustworthiness vs assumption of untrustworthiness
If you use BitKey in the "if I tell you I'll have to kill you" model described above, the only thing coming out of the air-gap is the QRcode of the signed transaction. That output is verifiable by a human in the loop on a separate, independent, generic device.
More critically, each message passing operation needs a human in the loop at the physical level because the user needs to physically scan the QRcode into a separate independent device where additional verification can be run. If you don't trust your Internet connected phone, you can buy another phone that is always in Airplane mode and do the verification there.
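As a rough illustration of that additional verification step (the field names are hypothetical, not a real BitKey format), the independent device only needs to decode the scanned JSON and check that the outputs match the addresses and amounts you intended to pay:

```python
# A minimal sketch (field names are hypothetical) of the verification step on
# the independent device: after scanning the QR code, check that the decoded
# transaction pays only the addresses and amounts you intended.
import json
from decimal import Decimal


def verify_outputs(scanned_json, expected):
    """expected: dict mapping destination address -> amount in BTC (as str)."""
    tx = json.loads(scanned_json)
    actual = {o["address"]: Decimal(o["btc"]) for o in tx["outputs"]}
    wanted = {addr: Decimal(amount) for addr, amount in expected.items()}
    if actual != wanted:
        raise ValueError(f"MISMATCH! expected {wanted}, got {actual}")
    return True


if __name__ == "__main__":
    scanned = '{"outputs": [{"address": "1ExampleDestAddr...", "btc": "0.5"}]}'
    verify_outputs(scanned, {"1ExampleDestAddr...": "0.5"})
    print("Outputs match what you intended to pay - safe to broadcast.")
```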
Your wallet can survive an evil BitKey, but not an evil Trezor
In other words, if BitKey is evil, you can still verify that the transaction it creates is correct on your phone. Unlike with Trezor, there is no way to hide evil transactions or the covert transmission of secret keys, and this property can easily be verified by the user in many ways. Again, you don't have to trust BitKey, and that's better than having to trust BitKey.
Trezor <- USB -> PC with Internet == Trezor <-> Internet
By comparison, using USB to directly connect a device that has access to your private keys to a (presumed hostile) Internet-connected PC is inherently less secure, because the security of my wallet in that use case now depends on the correct behavior of a much more complex arrangement of parts, all of which I have to trust not to malfunction either due to malicious action (e.g., an evil Trezor) or an honest mistake.
Security is inversely proportional to complexity
Relative to QRcodes, the attack surface for USB is large because of the complexity involved in supporting so many possible use cases. A USB device can be a high-speed storage device, a keyboard/mouse, a sound card, a camera, a Bitcoin hardware wallet, etc. It's kind of a security nightmare because you never know what the USB device you are connecting to your computer really does. An innocent looking USB drive can actually be a hacker weapon.
In any case, supporting all of those use cases requires significant complexity at the hardware (chipset) and software (e.g., kernel drivers) level. Congratulations on implementing your own USB stack for Trezor. Maybe it's better than using a general purpose USB stack, but it's still not as good as not depending on the USB stack to begin with. Opening your code is a good start but it's no guarantee. As we've seen recently with the OpenSSL Heartbleed issue, having the source code to security software is not a proof of correctness.
There is no such thing as a perfect implementation
Remember, as long as you're using imperfect components it is safer to assume that if something can go wrong it will. I wouldn't mind using Trezor as another layer of defense (e.g., connected to BitKey) but I think the recommended use case which you are advertising is potentially catastrophic.
The security of any hardware wallet connected by USB to a hostile computer with Internet access is always going to depend on the hardware wallet being a 100% perfect implementation, free of inadvertent and deliberate security vulnerabilities. Yikes! Good luck with that. If you pull it off I think you'll be the first ever to manage a perfect implementation of anything without a proof of correctness. Consider expanding into other lines of business, because there is great demand for those mad skillz in many areas.
Even if Trezor was perfect that wouldn't be enough
Even if I trusted the Trezor team to be the first to create a perfectly secure implementation of anything, I still have to trust your assembly line, every single source of hardware components, and every single link in the shipping chain.
BitKey can be downright malicious and still be good enough
It's my party and I'll discuss Random Number Generators if I want to
Finally, regarding why the original post discusses Random Number Generators, that was in response to comments made on Reddit, which I basically wholeheartedly agreed with. Maybe you are aware of the issues with trusting RNGs, but judging from the conversations I've had at the Bitcoin meetups it isn't by any means obvious to everyone and that includes some very technically savvy people. Not everyone eats attack trees for lunch my friend. I've had some frustrating conversations with people who are making very foolish assumptions and I don't want that to happen with BitKey.
So basically it's my party and I'll cry if I want to. You would too if it happened to you.
BitKey for your vault, Trezor for your wallet
Hi Tomas,
Sorry for the late reply (I ran out of time last week) and thanks for the detailed, insightful response.
With regards to the mass appeal of something like BitKey: that depends on many things including how easy it is to use. Handling your keys on an air-gapped system will never be as convenient as handling them on a peripheral connected directly to your PC. But... I can tell you that I use this myself and that after you get the hang of it, doing extremely secure air-gapped transactions in the "If I tell you I'll have to kill you" model is pretty easy. The first transaction is the hardest because of the learning curve but after that it is smooth sailing.
You don't have to keep the other machine in the basement. I use an Intel NUC pictured below that sits on my desk with a dumb KVM switch to access my airgap and an old spare phone in airplane mode to verify my transactions. It's actually pretty convenient though of course there is always room for improvement on the usability front: better embedded documentation, a wizard, etc.
Pictured above from left to right: a Raspberry Pi, an Intel NUC, my phone and a pen for scale.
People are already used to giving up convenience for security: taking cash out of your wallet (or from under the mattress) is easier than taking cash out of your local bank branch, yet people still keep most of their savings at the local bank. For security reasons, mostly.
BitKey doesn't need mass appeal to be useful: the potential for mass appeal is all speculation. Who knows? We're working on BitKey because we needed something like this ourselves, and if anyone else finds it useful then that's awesome. We're not making hardware so we're not really worried about economies of scale. If "lumberjack joe" uses it then great. But if BitKey is useful just to us then that's enough to keep it going. Mass appeal would be a nice stroke for our egos I guess, but it doesn't really matter whether or not everybody uses this, because this isn't a social application like Facebook. It's a way to create your own virtual vault and you don't need your friends to be on board with that.
If I'm the only user, BitKey is still useful to me. It might even be more useful because the more people use BitKey the harder attackers might try to leverage it in an attack. I mean just imagine if every Bitcoin user in the world is using Trezor in a world in which Bitcoin has become a widely accepted form of currency. It makes attacks against every possible branch in the Trezor attack tree including your production line and shipping companies potentially quite profitable. And if that happens, and people start getting their wallets stolen, they might turn to a solution like a more evolved version of BitKey to store their life savings.
Security is never black and white, it's all about raising the cost of attack: regarding the points you raised with regards to having to trust BitKey, I agree there is always room for improvement. Security is never black and white. It's about increasing the cost of attack as much as possible and gradually raising the bar by closing off avenues for attacks of ever increasing sophistication and cost. At the end of the day preventing attacks completely is impossible, because security also depends on your physical security, which will never be perfect. So what we're most concerned about is reducing the likelihood of attacks that will affect a large number of users by compromising central points of failure. Such as a production line or a shipping company.
If you build BitKey from source code yourself that helps alleviate some concerns, because inserting complex self-replicating malware (e.g., the kind that identifies secret keys and flashes them to your BIOS) into the build process is quite the technical challenge. For good measure you could go one step further and build TKLDev, TurnKey's self-contained build toolchain, from source code, or better yet recreate the build process for BitKey on your own computer using TKLDev as a reference. All the components of the build process are themselves free software and tracked in Git, which makes sneaking bad stuff in much harder.
Even after doing all of that you might decide you still can't trust the RNG in BitKey, at least not if the stakes are high enough, but you should be able to trust it more not to be one giant malware application trying to leak your keys out through the QRcode or flashing them to the BIOS.
Running BitKey on Raspberry Pi will provide even better security: of course, we can increase security by using hardware such as the Pi, in which the BIOS can not be updated and there are fewer (no?) places for state to be secretly saved. If you boot BitKey into RAM you can then remove the SD card and run BitKey on the Pi entirely from RAM, with no place for an evil BitKey to save any state, including your secret keys. That would raise the bar significantly. I realize that on PC hardware there are potentially many more places where state can be saved, even on a machine without a hard drive (e.g., NIC firmware, BIOS, hard drive firmware, etc.)
QRcode leakage of keys is mitigated by usage of a separate device for verification: it's true that QRcodes can be encoded in many ways, so there is the potential for information leakage. However, that should be less of a concern because you are using an independent device (e.g., your phone) to scan the QRcode and convert it to plaintext. So even if part of your key was encoded in the QRcode, the attacker would still need to compromise your phone to get at it.
Trezor / USB based wallets are useful when the stakes are low: BTW, just for the record I do think USB based hardware wallets like the Trezor are useful and will find their place in the Bitcoin ecosystem. Whether that will be as the ultimate end-all of Bitcoin security remains to be determined. It's just that when the stakes are high enough you might not want to trust them as much as a more secure arrangement, such as the one that can potentially be provided by BitKey. It's like having some money in your wallet while keeping most of it in your bank. And if you're really worried, you keep it in a tangible form like gold and drop that into a vault guarded by multiple layers of security. Needing a vault with armed guards for high-end security doesn't make the wallet useless for lower-risk stuff.
Thanks for the reference to RFC6979: I realized where the confusion regarding the RNG came from. I was focused on the RNG for key generation, while you were responding to the usage of the RNG for signatures (e.g., RFC6979). To be honest I hadn't realized the extent of key leakage during ECDSA signatures, so thanks for helping to educate me about that. I'm going to take a closer look at how the software we use in BitKey to sign transactions handles this, to make sure we don't have an issue.
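For readers following along, here is a simplified illustration of the idea behind RFC6979, not the exact algorithm the RFC specifies: the ECDSA nonce is derived deterministically from the private key and the message hash, so signing never consults an RNG at all, and a broken or malicious RNG can't leak the private key through bad nonces.

```python
# A simplified illustration of the idea behind RFC6979 (NOT the exact algorithm
# specified in the RFC): derive the ECDSA nonce k deterministically from the
# private key and the message hash instead of asking an RNG.
import hashlib
import hmac

# Order of the secp256k1 group used by Bitcoin.
SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141


def deterministic_nonce(private_key: int, msg_hash: bytes) -> int:
    """Toy derivation: k = HMAC-SHA256(private_key, msg_hash), reduced into [1, n-1]."""
    key_bytes = private_key.to_bytes(32, "big")
    digest = hmac.new(key_bytes, msg_hash, hashlib.sha256).digest()
    return (int.from_bytes(digest, "big") % (SECP256K1_N - 1)) + 1


if __name__ == "__main__":
    priv = 0xDEADBEEF  # toy private key; never use a low-entropy key like this
    h = hashlib.sha256(b"example transaction bytes").digest()
    # Same key + same message always yields the same nonce, and no RNG output
    # is ever involved.
    print(hex(deterministic_nonce(priv, h)))
    print(hex(deterministic_nonce(priv, h)))
```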
Trezor Pi accessory board != Trezor Pi implementation: oh and what I meant regarding a Pi version of Trezor was not a Pi accessory board but an actual Pi based implementation that didn't require any special, possibly "evil" hardware, so that users didn't have to depend on the security of Trezor's production line. An implementation of Trezor that can actually run on cheap, widely available open hardware would be pretty cool, but since it runs counter to the Trezor business model of selling devices I'm guessing it's less likely to happen. Still a good idea though.
The typical PC can not be trusted with financial transactions
With regards to a general purpose computer counting as an independent device:
The security of a general purpose personal computer connected to the Internet is so weak that I wouldn't even count it as an independent device when the stakes are high.
These are systems with a scary attack surface, thanks to the enormous complexity required to do so many things. They have legacy security models that suck something awful and are typically installed, configured and used by someone with no security expertise whatsoever. I'd argue that the typical phone is actually much more tightly locked down than the typical computer. It's also one small piece of mobile hardware that is easier to secure than my desktop workstation.
Case in point, just think of the profit model that supports the development of sophisticated distributed botnets with millions of nodes: fractions of a cent per machine. Imagine what effort criminals would consider justified when there was real money involved. Today's gold standard, patching your machine against known vulnerabilities, wouldn't be enough; you'd need to worry about unknown zero day exploits, because the economics would work out.
Regarding banks, credit cards and security:
You make some good points Tomas. I think people store their money for a mix of reasons nowadays, including convenience. We pay for everything with credit cards and the bank provides us with an interface to that system. Though you wouldn't trust the bank unless you knew that it also provided that convenience with multiple levels of security, including depositor insurance that is even backed by the state in most countries. With credit cards too there's a balance between convenience and security. Unauthorized charges can be reversed. People probably wouldn't have gotten into the habit of using credit cards for everything without that.
Also keep in mind that banks predated credit cards by a few hundred years and it's always been a mix of security and convenience. With the first banks the motivation for putting your gold in the bank rather than carrying it around was a mix of security (you couldn't be robbed of your physical gold) and convenience - paper receipts weigh less and can be converted into letters of credit, which allows money to be transmitted as information to other bank branches.
You don't need to run faster than the bear, you just need to run faster than the other guy running from the bear:
I agree that as someone who understands more about security I am probably not the best target candidate. Easier to go after someone else. But that assumes I am being individually attacked. The kind of attacks I'm worried about the most are the most profitable kind, the kind that can be "mass produced". Being a security expert doesn't magically give you an advantage unless it changes something at the technical level. If an expert does Bitcoin transactions from his PC connected to the Internet he is placing himself at significant risk. I think the main difference is that an expert is more aware of the risks and would be more reluctant to do that in the first place.
Some legitimacy to your claims
They don't actually have a BIOS as such; the GPU firmware fulfils the role that a BIOS/UEFI does normally. Although what you state about the closed nature of the hardware and the bizarre 'GPU boots the CPU' thing are on the money...
However FWIW the firmware binaries are not distributed by Broadcom - they just distribute the specs. There are a few people that know what is in those proprietary binary blobs e.g. the RaspberryPi Foundation make the Linux firmware (binary blob) themselves that they distribute. If you were interested you too could get the inside info. The catch is though that you need to sign their NDA and then aren't able to share the info that you find out (for fear of having your ass sued off...!)
I'm not saying that it's all rosy. And I totally agree that open (open hardware as well as open software) would be much better, but IMO it's not quite as bad as you make it sound.
No, firmware source code is not publicly available
You have to sign the NDA with Broadcom, and at their discretion they will (may) give you the specs of the hardware. You then need to build the code (firmware) to support it yourself. And you are not allowed to publicly distribute the hardware specs or any derivatives of them (i.e. the firmware source code that you develop). You can only distribute the compiled binaries.
So the RaspberryPi Foundation have signed the NDA with Broadcom and use the info that is released to them by Broadcom to design and build the GPU firmware binary blob themselves (as I said above AFAIK only the specs are provided by Broadcom). Due to the NDA they can not distribute anything but the firmware binary blob.
So in theory you too could approach Broadcom, sign their NDA and get a copy of the specs...
Maybe you're right?
TBH I'm not sure why you would have to sign an NDA to get access to a binary firmware blob that is broadly available and legally able to be distributed to third parties already...?! Doesn't make any sense to me... Or maybe that was a special deal being offered to that particular dev (maybe via Eben?)...?
Perhaps you could contact Broadcom and/or RaspberryPi (perhaps even Eben himself?) and find out for sure one way or another...
Mitigating potential attack vectors
With regards to leaking the private key through qrcodes, we mitigate this somewhat by having the qrcode generator as a separate javascript app that you can cut and paste arbitrary information into. If you don't paste your private key into the app, it doesn't have access to it.
Added build info to the bitkey.io website
Added WarpWallet to the latest version of BitKey
As much as I'd like to do all of my computing with 100% open hardware that I fabricated from raw materials, that's not currently an option, so I use a stripped down airgapped mini-PC for the high risk stuff. If my BTC gets stolen, it will probably be after everybody else's has been stolen first.
Unfortunately not... :(
Bitkey currently only builds for x86_64