Bitcoin: Security vulnerabilities

For devs and advanced users that are still in the dark: Read this to get redpilled about why Bitcoin (SV) is the real Bitcoin

This post by cryptorebel is a great intro for newbies. Here is a continuation for a technical audience. I'll be making edits for readability and maybe even add more content.
The short explanation of why BSV is the real Bitcoin is that it implements the original L1 scripting language, and removes hacks like p2sh. It also removes the block size limit, and yes that leads to a small number of huge nodes. It might not be the system you wanted. Nodes are miners.
The key thing to understand about the UTXO architecture is that it is maximally "sharded" by default. Logically dependent transactions may require linear span to construct, but they can be validated in sublinear span (actually polylogarithmic expected span). Constructing dependent transactions happens out-of-band in any case.
The fact that transactions in a block are merkelized is an obvious sign that Bitcoin was designed for big blocks. But merkle trees are only half the story. UTXOs are essentially hash-addressed stateful continuation snapshots which can also be "merged" (validated) in a tree.
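To make that concrete, here is a minimal sketch (endianness details omitted) of how a block's transaction ids are merkelized - Bitcoin hashes with double-SHA256 and duplicates the last hash when a level has an odd count. Note that any subtree's hash can be computed independently of its siblings, which is what makes tree-shaped, parallel validation possible:

    import hashlib

    def dsha256(b: bytes) -> bytes:
        # Bitcoin's merkle node hash: SHA256 applied twice
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def merkle_root(txids: list) -> bytes:
        layer = list(txids)
        while len(layer) > 1:
            if len(layer) % 2:  # odd count: duplicate the last hash
                layer.append(layer[-1])
            layer = [dsha256(layer[i] + layer[i + 1])
                     for i in range(0, len(layer), 2)]
        return layer[0]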
I won't even bother talking about how broken Lightning Network is. Of all the L2 scaling solutions that could have been used with small block sizes, it's almost unbelievable how many bad choices they've made. We should be kind to them and assume it was deliberate sabotage rather than insulting their intelligence.
Segwit is also outside the scope of this post.
However I will briefly hate on p2sh. Imagine seeing a stunted L1 script language, and deciding that the best way to implement multisigs was a soft-fork patch in the form of p2sh. If the intent was truly backwards-compatibility with old clients, then by that logic all segwit and p2sh addresses are supposed to only be protected by transient rules outside of the protocol. Explain that to your custody clients.
As far as Bitcoin Cash goes, I was in the camp of "there's still time to save BCH" until not too long ago. Unfortunately the galaxy brains behind BCH have doubled down on their mistakes. Again, it is kinder to assume deliberate sabotage. (As an aside, the fact that they didn't embrace the name "bcash" when it was used to attack them shows how unprepared they are when the real psyops start to hit. Or, again, that the saboteurs controlled the entire back-and-forth.)
The one useful thing that came out of BCH is some progress on L1 apps based on covenants, but the issue is that they are not taking care to ensure every change maintains the asymptotic validation complexity of bitcoin's UTXO.
Besides that, the BCH devs missed something big. So did I.
It's possible to load the entire transaction onto the stack without adding any new opcodes. Read this post for a quick intro on how transaction meta-evaluation leads to stateful smart contract capabilities. Note that it was written before I understood how it was possible in Bitcoin, but the concept is the same. I've switched to developing a language that abstracts this behavior and compiles to bitcoin's L1. (Please don't "told you so" at me if you just blindly trusted nChain but still can't explain how it's done.)
It is true that this does not allow exactly the same class of L1 applications as Ethereum. It only allows those that can be made parallel, those that can delegate synchronization to "userspace". It forces you to be scalable, to process bottlenecks out-of-band at a per-application level.
Now, some of the more diehard supporters might say that Satoshi knew this was possible and meant for it to be this way, but honestly I don't believe that. nChain says they discovered the technique 'several years ago'. OP_PUSH_TX would have been a very simple opcode to include, and it does not change any aspect of validation in any way. The entire transaction is already in the L1 evaluation context for the purpose of checksig, it truly changes nothing.
But here's the thing: it doesn't matter if this was a happy accident. What matters is that it works. It is far more important to keep the continuity of the original protocol spec than to keep making optimizations at the protocol level. In a concatenative language like bitcoin script, optimized clients can recognize "checksig trick phrases" regardless of their location in the script, and treat them like a simple opcode. Script size is not a constraint when you allow the protocol to scale as designed. Think of it as precompiles in EVM.
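As an illustration of that last point, here is a hedged sketch of phrase recognition - PUSH_TX_PHRASE and "NATIVE_PUSH_TX" are hypothetical stand-ins (the real checksig-trick byte sequence is much longer), not protocol constants:

    PUSH_TX_PHRASE = bytes.fromhex("deadbeef")  # placeholder byte sequence

    def tokenize(script: bytes) -> list:
        ops, i = [], 0
        while i < len(script):
            if script[i:i + len(PUSH_TX_PHRASE)] == PUSH_TX_PHRASE:
                # recognize the well-known phrase wherever it appears in the
                # script and treat it as one "opcode" backed by a native fast path
                ops.append("NATIVE_PUSH_TX")
                i += len(PUSH_TX_PHRASE)
            else:
                ops.append(script[i])  # ordinary byte-by-byte interpretation
                i += 1
        return ops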
Now let's address Ethereum. V. Buterin recently wrote a great piece about the concept of credible neutrality. The only way for a blockchain system to achieve credible neutrality and long-term decentralization of power is to lock down the protocol rules. The thing that caused Ethereum to succeed was the yellow paper. Ethereum has outperformed every other smart contract platform because the EVM has clear semantics with many implementations, so people can invest time and resources into applications built on it. The EVM is apolitical, the EVM spec (fixed at any particular version) is truly decentralized. Team Ethereum can plausibly maintain credibility and neutrality as long as they make progress towards the "Serenity" vision they outlined years ago. Unfortunately they have already placed themselves in a precarious position by picking and choosing which catastrophes they intervene on at the protocol level.
But those are social and political issues. The major technical issue facing the EVM is that it is inherently sequential. It does not have the key property that transactions that occur "later" in the block can be validated before the transactions they depend on are validated. Sharding will hit a wall faster than you can say "O(n/64) is O(n)". Ethereum will get a lot of mileage out of L2, but the fundamental overhead of synchronization in L1 will never go away. The best case scaling scenario for ETH is an L2 system with sublinear validation properties like UTXO. If the economic activity on that L2 system grows larger than that of the L1 chain, the system loses key security properties. Ethereum is sequential by default with parallelism enabled by L2, while Bitcoin is parallel by default with synchronization forced into L2.
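To illustrate the difference, here is a sketch of why UTXO blocks can be validated out of order - check_scripts is a placeholder for real signature/script validation; the point is that a transaction only needs the data of the outputs it spends, never the validation result of the transaction that created them:

    from concurrent.futures import ThreadPoolExecutor

    def check_scripts(tx, prevouts) -> bool:
        return True  # placeholder for actual script execution

    def validate_block(txs, utxo_set):
        # first pass: index every output created in this block (cheap, no crypto)
        known = dict(utxo_set)
        for tx in txs:
            for n, out in enumerate(tx["outputs"]):
                known[(tx["txid"], n)] = out
        # second pass: every transaction can be checked independently, in any order
        def check(tx):
            return (all(op in known for op in tx["inputs"])
                    and check_scripts(tx, [known[op] for op in tx["inputs"]]))
        with ThreadPoolExecutor() as pool:
            return all(pool.map(check, txs))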
Finally, what about CSW? I expect soon we will see a lot of people shouting, "it doesn't matter who Satoshi is!", and they're right. The blockchain doesn't care if CSW is Satoshi or not. It really seems like many people's mental model is "Bitcoin (BSV) scales and has smart contracts if CSW==Satoshi". Sorry, but UTXO scales either way. The checksig trick works either way.
Coin Woke.
submitted by -mr-word- to bitcoincashSV

Technical: The `SIGHASH_NOINPUT` Debate! Chaperones and output tagging and signature replay oh my!

Bitcoin price isn't moving oh no!!! You know WHAT ELSE isn't moving?? SIGHASH_NOINPUT that's what!!!
Now as you should already know, Decker-Russell-Osuntokun ("eltoo") just ain't possible without SIGHASH_NOINPUT of some kind or other. And Decker-Russell-Osuntokun removes the toxic waste problem (i.e. old backups of your Poon-Dryja LN channels are actively dangerous and could lose your funds if you recover from them, or worse, your most hated enemy could acquire copies of your old state and make you lose funds). Decker-Russell-Osuntokun also allows multiparticipant offchain cryptocurrency update systems, without the drawback of the large unilateral close timeout that Decker-Wattenhofer has, making this construction better for use at the channel factory layer.
Now cdecker already wrote some code implementing SIGHASH_NOINPUT before, which would make it work in current pre-SegWit P2PKH, P2SH, as well as SegWit v0 P2WPKH and P2WSH. He also made and published BIP 118.
But as is usual for Bitcoin Core development, this triggered debate, and thus many counterproposals were made and so on. Suffice it to say that the simple BIP 118 looks like it won't be coming into Bitcoin Core anytime soon (or possibly at all).
First things first: This link contains all that you need to know, but hey, maybe you'll find my take more amusing.
So let's start with the main issue.

Signature Replay Attack

The Signature Replay Attack, in brief: because a SIGHASH_NOINPUT signature does not commit to the specific coin being spent, one such signature can be replayed to spend any other coin of the same value protected by the same key (think address reuse). This is the reason why SIGHASH_NOINPUT has triggered debate as to whether it is safe at all and whether we can add enough stuff to it to ever make it safe.
Now of course you could point to SIGHASH_NONE which is even worse because all it does is say "I am authorizing the spend of this particular coin of this particular value protected by my key" without any further restrictions like which outputs it goes to. But then SIGHASH_NONE is intended to be used to sacrifice your money to the miners, for example if it's a dust attack trying to get you to spend: you broadcast a SIGHASH_NONE signature and some enterprising miner will go get a bunch of such SIGHASH_NONE signatures and gather up the dust in a transaction that pays to nobody and gets all the funds as fees. And besides, even if we already have something you could do stupid things with, it's not a justification for adding more things you could do stupid things with.
So yes, SIGHASH_NOINPUT makes Bitcoin more powerful. Now, Bitcoin is a strong believer in "Principle of Least Power". So adding more power to Bitcoin via SIGHASH_NOINPUT is a violation of Principle of Least Power, at least to those arguing to add even more limits to SIGHASH_NOINPUT.
I believe nullc is one of those who strongly urges for adding more limits to SIGHASH_NOINPUT, because it distracts him from taking pictures of his autonomous non-human neighbor, a rather handsome gray fox, but also because it could be used as the excuse for the next MtGox, where a large exchange inadvertently pays to SIGHASH_NOINPUT-using addresses and becomes liable/loses track of their funds when signature replay happens.

Output Tagging

Making SIGHASH_NOINPUT safer by not allowing normal addresses to use it.
Basically, we have 32 different SegWit versions. The current SegWit addresses are v0, the next version (v1) is likely to be the Schnorr+Taproot+MAST thing.
What output tagging proposes is to limit SegWit versions in the bech32 address scheme to the range 0->15 (instead of the 0->31 it currently has). Versions 16 to 31 would then not be valid bech32 SegWit addresses, and exchanges shouldn't pay to them.
Then, we allow the use of SIGHASH_NOINPUT only for version 16. Version 16 might very well be Schnorr+Taproot+MAST, with a side serving of SIGHASH_NOINPUT.
This is basically output tagging. SIGHASH_NOINPUT can only be used if the output is tagged (by paying to version 16 SegWit) to allow it, and addresses do not allow outputs to be tagged as such, removing the potential liability of large custodial services like exchanges.
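A minimal sketch of the address-side rule, assuming the (witness_version, program) pair has already been decoded from bech32:

    def is_payable_address(witness_version: int) -> bool:
        # versions 16-31 stay valid at the script level but get no address
        # encoding, so ordinary senders can never be handed a NOINPUT-capable
        # (v16) destination as a plain address
        return 0 <= witness_version <= 15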
Now, Decker-Russell-Osuntokun channels have two options:
The tradeoffs in this case are:
The latter tradeoff is probably what would be taken (because we're willing to pay for privacy) if Bitcoin Core decides in favor of tagged outputs.
Another issue here is --- oops, P2SH-Segwit wrapped addresses. P2SH can be used to wrap any SegWit payment script, including payments to any SegWit version, including v16. So now you can sneak in a SIGHASH_NOINPUT-enabled SegWit v16 inside an ordinary P2SH that wraps a SegWit payment. One easy way to close this is just to disallow P2SH-SegWit from being valid if it's spending to SegWit version >= 16.

Chaperone Signatures

Closing the Signature Replay Attack by adding a chaperone.
Now we can observe that the Signature Replay Attack is possible because only one signature is needed, and that signature allows any coin of appropriate value to be spent.
Adding a chaperone signature simply means requiring that the SCRIPT involved have at least two OP_CHECKSIG operations. If one signature is SIGHASH_NOINPUT, then at least one other signature (the chaperone) validated by the SCRIPT should be SIGHASH_ALL.
This is not so onerous for Decker-Russell-Osuntokun. Both sides can use a MuSig of their keys, to be used for the SIGHASH_NOINPUT signature (so requires both of them to agree on a particular update), then use a shared ECDH key, to be used for the SIGHASH_ALL signature (allows either of them to publish the unilateral close once the update has been agreed upon).
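In pseudo-script form (a sketch only - no chaperone template was ever ratified, and both keys here are placeholders), the output script might look like:

    chaperoned_script = [
        "<MuSig(A,B) pubkey>", "OP_CHECKSIGVERIFY",  # update sig, may be SIGHASH_NOINPUT
        "<shared ECDH pubkey>", "OP_CHECKSIG",       # chaperone sig, SIGHASH_ALL
    ]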
Of course, the simplest thing to do would be for a BOLT spec to say "just use this spec-defined private key k so we can sidestep the Chaperone Signatures thing". That removes the need to coordinate to define a shared ECDH key during channel establishment: just use the spec-indicated key, which is shared to all LN implementations.
But now look at what we've done! We've subverted the supposed solution of Chaperone Signatures, making them effectively not there, because it's just much easier for everyone to use a standard private key for the chaperone signature than to derive a separate new keypair for the Chaperone.
So chaperone signatures aren't much better than just doing SIGHASH_NOINPUT by itself, and you might as well just use SIGHASH_NOINPUT without adding chaperones.
I believe ajtowns is the primary proponent of this proposal.

Toys for the Big Boys

The Signature Replay Attack is Not A Problem (TM).
This position is most strongly held by RustyReddit I believe (he's the Rusty Russell in the Decker-Russell-Osuntokun). As I understand it, he is more willing to not see SIGHASH_NOINPUT enabled, than to have it enabled but with restrictions like Output Tagging or Chaperone Signatures.
Basically, the idea is: don't use SIGHASH_NOINPUT for normal wallets, in much the same way you don't use SIGHASH_NONE for normal wallets. If you want to do address reuse, don't use wallet software made by luke-jr that specifically screws with your ability to do address reuse.
SIGHASH_NOINPUT is a flag for use by responsible, mutually-consenting adults who want to settle down some satoshis and form a channel together. It is not something that immature youngsters should be playing around with, not until they find a channel counterparty that will treat this responsibility properly. And if those immature youngsters playing with their SIGHASH_NOINPUT flags get into trouble and, you know, lose their funds (as fooling around with SIGHASH_NOINPUT is wont to do), well, they need counseling and advice ("not your keys not your coins", "hodl", "SIGHASH_NOINPUT is not a toy, but something special, reserved for those willing to take on the responsibility of making channels according to the words of Decker-Russell-Osuntokun"...).

Conclusion

Dunno yet. It's still being debated! So yeah. SIGHASH_NOINPUT isn't moving, just like Bitcoin's price!!! YAAAAAAAAAAAAAAAAAAA.
submitted by almkglor to Bitcoin

PSA: Guide on how to recover your lost Segwit coins using Electron Cash

How to get your recovered SegWit funds using Electron Cash

Background

Thousands of BCH on thousands of coins that were accidentally sent to Segwit 3xxx addresses were recovered by BTC.TOP in block 582705.
This was a wonderful service to the community. This had to be done quickly as the coins were anyone-can-spend and needed to be sent somewhere. This all had to be done before thieves could get their dirty paws on them.
So.. How were they recovered? Did BTC.TOP just take the coins for themselves? NO: They were not taken by BTC.TOP. This would be wrong (morally), and would open them up to liability and other shenanigans (legally).
Instead, BTC.TOP acted quickly and did the legally responsible thing with minimal liability. They were sent on to the intended destination address of the SegWit transaction (as translated to a normal BCH address).
This means BTC.TOP did not steal your coins and/or does not have custody of your funds!
But this does mean you now need to figure out how to get the private key associated with where they were sent -- in order to unlock the funds. (Which will be covered below).
Discussions on why this was the most responsible thing to do and why it was done this way are available upon request. Or you can search this subreddit to get to them.

Ok, so BTC.TOP doesn't have them -- who does?

You do (if they were sent to you)! Or -- the person / address they were sent to does!

HUH?

The Segwit transactions have a bad/crazy/messed-up format whose output (destination) contains a hash of a public key inside. So they "sort of" contain a regular bitcoin address inside of them, with other Segwit garbage around them. This hash was decoded and translated to a regular BCH address, and the funds were sent there.
Again: The funds were forwarded on to a regular BCH address where they are safe. They are now guarded by a private key -- where they were not before (before they were "anyone can spend"). It can be argued this is the only reasonable thing to have done with them (legally and morally) -- continue to send them to their intended destination. This standard, if it's good enough for the US Post Office and Federal Mail, is good enough here. It's better than them being stolen.

Ok, I get it... they are on a regular BCH address now. The address of the destination of the Tx, right?

Yes. So now a regular BCH private key (rather than anyone can spend) is needed to spend them further. Thus the Segwit destination address you sent them to initially was effectively translated to a BCH regular address. It's as if you posted a parcel with the wrong ZIP code on it -- but the USPS was nice enough to figure that out and send it to where you intended it to go.

Why do it this way and not return to sender?

Because of the ambiguity present, it's not entirely clear which sender to return them to. There is too much ambiguity there, and it would have led to many inputs not being recovered in a proper manner. More discussion on this is available upon request.

Purpose of this guide

This document explains how to:
Complications to watch out for:

Step 1: Checking where your coins went

To verify if this recovery touched one of your lost coins: look for the transaction that spent your coins and open it on bch.btc.com explorer.

Normal aka "P2PKH"

Let’s take this one for example.
Observe the input says:
P2SH 160014d376cf1baff9eeed943d58551d53c48377adb98c 
And the output says:
P2PKH OP_DUP OP_HASH160 d376cf1baff9eeed943d58551d53c48377adb98c OP_EQUALVERIFY OP_CHECKSIG 
Notice a pattern?
The fact that these two hexadecimal strings are the same (the input push after its 160014 prefix, and the hash160 in the output script) means that the funds were forwarded to the identical public key, and can be spent by the private key (corresponding to that public key) if it is imported into a Bitcoin Cash wallet.
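If you want to check this programmatically, a minimal sketch using the example strings above:

    def same_pubkey_hash(input_hex: str, output_hash160_hex: str) -> bool:
        # a P2WPKH-in-P2SH redeem script push looks like: 16 00 14 <20-byte hash160>
        assert input_hex.startswith("160014")
        return input_hex[6:].lower() == output_hash160_hex.lower()

    same_pubkey_hash("160014d376cf1baff9eeed943d58551d53c48377adb98c",
                     "d376cf1baff9eeed943d58551d53c48377adb98c")  # -> True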

Multisig aka "P2SH"

If the input starts with “P2SH 220020…”, as in this example, then your segwit address is a script -- probably a multisignature. While the input says “P2SH 22002019aa2610492ee2c18605597136294596d4f0f9bc6ce0974ed3a975d65da4ca1e”, the output says “P2SH OP_HASH160 21bdc73fb15b3bb7bd1be365e92447dc2a44e662 OP_EQUAL”. These two strings actually correspond to the same script, but they are different in content and length due to segwit’s design. However, you just need to RIPEMD160 hash the first string and compare to the second -- you can check this by entering the input string (after the 220020 part) into this website’s Binary Hash field and checking the resulting RIPEMD160 hash. The resulting hash is 21bdc73fb15b3bb7bd1be365e92447dc2a44e662, which corresponds to the output hex above, and this means the coins were forwarded to the same spending script but in "non-segwit form". You will need to re-assemble the same multi-signature setup and enough private keys on a Bitcoin Cash wallet. (Sorry for the succinct explanation here. Ask in the comments for more details perhaps.)
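The same check as a short sketch: RIPEMD160 of the 32-byte digest after the 220020 prefix is exactly hash160(witness script), which is the 20-byte hash in the recovery output (hashlib's ripemd160 requires OpenSSL support, which most builds have):

    import hashlib

    def p2sh_hash_from_input(input_hex: str) -> str:
        # a P2WSH-in-P2SH redeem script push looks like: 22 00 20 <SHA256(witness script)>
        assert input_hex.startswith("220020")
        sha = bytes.fromhex(input_hex[6:])
        return hashlib.new("ripemd160", sha).hexdigest()  # = hash160(witness script)

    p2sh_hash_from_input(
        "22002019aa2610492ee2c18605597136294596d4f0f9bc6ce0974ed3a975d65da4ca1e")
    # -> '21bdc73fb15b3bb7bd1be365e92447dc2a44e662', matching the output above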

No match -- what?!

If the string does not match (identically in the Normal case above, or after properly hashing in the Multisig case above), then your coins were sent elsewhere, possibly even taken by an anonymous miner. :'(

Step 2: How To Do the Recovery

Recover "Normal" address transactions (P2PKH above)

This is for recoveries where the input string started with “160014”.
Option 1 (BIP39 seed):
Option 2 (single key):
Option 3 (xprv -- many keys):
Code:
    # Converts a BIP49 'yprv' extended private key into a plain 'xprv' by
    # swapping the 4-byte version prefix (0x0488ade4 = mainnet xprv);
    # the rest of the serialized key is unchanged.
    mkey = "yprvAJ48Yvx71CKa6a6P8Sk78nkSF7iqqaRob1FN7Jxsqm3L52K8XmZ7EtEzPzTUWXAaHNfN4DFAuP4cdM38yrE6j3YifV8i954hyD5rhPyUNVP"
    from electroncash.bitcoin import DecodeBase58Check, EncodeBase58Check
    EncodeBase58Check(b'\x04\x88\xad\xe4' + DecodeBase58Check(mkey)[4:])
Option 4 (hardware wallet):

How to Recover Multisignature wallets (P2WSH-in-P2SH in segwit parlance)

This is for recoveries where the input string started with "220020".
Please read the above instructions for how to import single keys. You will need to do similar but taking care to reproduce the same set of multisignature keys as you had in the BTC wallet. Note that Electron Cash does not support single-key multisignature, so you need to use the BIP39 / xprv approach.
If you don’t observe the correct address in Electron Cash, then check the list of public keys by right clicking on an address, and compare it to the list seen in your BTC wallet. Also ensure that the number of required signers is identical.
submitted by NilacTheGrim to btc

Why CHECKDATASIG Does Not Matter

Why CHECKDATASIG Does Not Matter

In this post, I will prove that the two main arguments against the new CHECKDATASIG (CDS) op-codes are invalid. And I will prove that two common arguments for CDS are invalid as well. The proof requires only one assumption (which I believe will be true if we continue to reactivate old op-codes and increase the limits on script and transaction sizes [something that seems to have universal support]):
ASSUMPTION 1. It is possible to emulate CDS with a big long raw script.
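For readers who haven't followed the debate, here is a minimal sketch of what the CDS operation checks, assuming the pure-Python ecdsa package: an ECDSA signature over the SHA256 of an arbitrary message - i.e., a signature that need not cover a transaction at all:

    import hashlib
    from ecdsa import VerifyingKey, SECP256k1, BadSignatureError
    from ecdsa.util import sigdecode_der

    def checkdatasig(sig_der: bytes, msg: bytes, pubkey_sec: bytes) -> bool:
        # CDS pops a signature, a message, and a public key, and verifies the
        # signature against sha256(message) rather than a transaction digest
        vk = VerifyingKey.from_string(pubkey_sec, curve=SECP256k1)
        try:
            return vk.verify_digest(sig_der, hashlib.sha256(msg).digest(),
                                    sigdecode=sigdecode_der)
        except BadSignatureError:
            return False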

Why are the arguments against CDS invalid?

Easy. Let's analyse the two arguments I hear most often against CDS:

ARG #1. CDS can be used for illegal gambling.

This is not a valid reason to oppose CDS because it is a red herring. By Assumption 1, the functionality of CDS can be emulated with a big long raw script. CDS would not then affect what is or is not possible in terms of illegal gambling.

ARG #2. CDS is a subsidy that changes the economic incentives of bitcoin.

The reasoning here is that being able to accomplish in a single op-code, what instead would require a big long raw script, makes transactions that use the new op-code unfairly cheap. We can shoot this argument down from three directions:
(A) Miners can charge any fee they want.
It is true that today miners typically charge transaction fees based on the number of bytes required to express the transaction, and it is also true that a transaction with CDS could be expressed with fewer bytes than the same transaction constructed with a big long raw script. But these two facts don't matter because every miner is free to charge any fee he wants for including a transaction in his block. If a miner wants to charge more for transactions with CDS he can (e.g., maybe the miner believes such transactions cost him more CPU cycles and so he wants to be compensated with higher fees). Similarly, if a miner wants to discount the big long raw scripts used to emulate CDS he could do that too (e.g., maybe a group of miners have built efficient ways to propagate and process these huge scripts and now want to give a discount to encourage their use). The important point is that the existence of CDS does not impede the free market's ability to set efficient prices for transactions in any way.
(B) Larger raw transactions do not imply increased orphaning risk.
Some people might argue that my discussion above was flawed because it didn't account for orphaning risk due to the larger transaction size when using a big long raw script compared to a single op-code. But transaction size is not what drives orphaning risk. What drives orphaning risk is the amount of information (entropy) that must be communicated to reconcile the list of transactions in the next block. If the raw-script version of CDS were popular enough to matter, then transactions containing it could be compressed as
....CDS'(signature, message, public-key)....
where CDS' is a code* that means "reconstruct this big long script operation that implements CDS." Thus there is little if any fundamental difference in terms of orphaning risk (or bandwidth) between using a big long script or a single discrete op code.
(C) More op-codes does not imply more CPU cycles.
Firstly, all op-codes are not equal. OP_1ADD (adding 1 to the input) requires vastly fewer CPU cycles than OP_CHECKSIG (checking an ECDSA signature). Secondly, if CDS were popular enough to matter, then whatever "optimized" version that could be created for the discrete CDS op-codes could be used for the big long version emulating it in raw script. If this is not obvious, realize that all that matters is that the output of both functions (the discrete op-code and the big long script version) must be identical for all inputs, which means that it does NOT matter how the computations are done internally by the miner.

Why are (some of) the arguments for CDS invalid?

Let's go through two of the arguments:

ARG #3. It makes new useful bitcoin transactions possible (e.g., forfeit transactions).

If Assumption 1 holds, then this is false because CDS can be emulated with a big long raw script. Nothing that isn't possible becomes possible.

ARG #4. It is more efficient to do things with a single op-code than a big long script.

This is basically Argument #2 in reverse. Argument #2 was that CDS would be too efficient and change the incentives of bitcoin. I then showed how, at least at the fundamental level, there is little difference in efficiency in terms of orphaning risk, bandwidth or CPU cycles. For the same reason that Argument #2 is invalid, Argument #4 is invalid as well. (That said, I think a weaker argument could be made that a good scripting language allows one to do the things he wants to do in the simplest and most intuitive ways and so if CDS is indeed useful then I think it makes sense to implement in compact form, but IMO this is really more of an aesthetics thing than something fundamental.)
It's interesting that both sides make the same main points, yet argue in the opposite directions.
Argument #1 and #3 can both be simplified to "CDS permits new functionality." This is transformed into an argument against CDS by extending it with "...and something bad becomes possible that wasn't possible before and so we shouldn't do it." Conversely, it is transformed to an argument for CDS by extending it with "...and something good becomes possible that was not possible before and so we should do it." But if Assumption 1 holds, then "CDS permits new functionality" is false and both arguments are invalid.
Similarly, Arguments #2 and #4 can both be simplified to "CDS is more efficient than using a big long raw script to do the same thing." This is transformed into an argument against CDS by tacking on the speculation that "...which is a subsidy for certain transactions which will throw off the delicate balance of incentives in bitcoin!!1!." It is transformed into an argument for CDS because "... heck, who doesn't want to make bitcoin more efficient!"

What do I think?

If I were the emperor of bitcoin I would probably include CDS because people are already excited to use it, the work is already done to implement it, and the plan to roll it out appears to have strong community support. The work to emulate CDS with a big long raw script is not done.
Moving forward, I think Andrew Stone's (thezerg1) approach outlined here is an excellent way to make incremental improvements to Bitcoin's scripting language. In fact, after writing this essay, I think I've sort of just expressed Andrew's idea in a different form.
* you might call it an "op code" teehee
submitted by Peter__R to btc

Crypto Telegram Groups

Cryptocurrencies:
@AelfBlockchain - ELF@Aeternity - AE@ArdorPlatform - ARDR@ArkEcosystem - ARK@AugurProject - REP@BATProject - BAT@BeamPrivacy - BEAM@LetsLiveBela - BELA@BitbayOfficial - BCN@Bitcoin - BTC@BitcoinCore - BTC@BitcoinCashFork - BCH@BitcoinGoldHQ - BTG@Bitshares_Community - BTS@BSVChat - BSV@BTTBitTorrent - BTT@BytecoinChat - BCN@BytomInternational - BTM@CallistoNet - CLO@CardanoGeneral - ADA@CentralityOfficialTelegram CENNZ@CloakProject - CLOAK@ChainLinkOfficial - LINK@CosmosProject - ATOM@Counterparty_XCP - XCP@CryptoComOfficial - MCO@CyberMilesToken - CMT@Dash_Chat - DASH@Decred - DCR@Dfinity - DFN@DigiBytecoin - DGB@DigixDAO - DGD@TheDogeHouse - DOGE@Electracoin - ECA@Emercoin_Official - EMC@EnigmaProject - ENG@EOSProject - EOS@EthClassic - ETC@Ether - ETH@EUNOofficial - EUNO@Everipedia - IQ@FactomFCT - FCT@Filecoin - FIL@GnosisPM - GNO@Grincoin - GRIN@Groestl - GRS@Hyperledger@IOTAtangle - IOTA@KomodoPlatform_Official - KMD@KyberNetwork - KNC@LAToken - LA@Litecoin - LTC@MaidSafeCoin - MAID@MakerDAOOfficial - MKR@Monero - XMR@Namecoin - NMC@Navcoin - NAV@Nemred - XEM@Neo_Blockchain - NEO@NervaXNV - XNV@Nimiq - NIM@NxtCommunity - NXT@OmiseGo - OMG@OmniLayer - OMNI@OntologyNetwork - ONT@Peercoin - PPC@PolymathNetwork - POLY@QtumOfficial - QTUM@RavencoinDev - RVN@Ripple - XRP@RSKOfficialCommunity - RIF@Siacoin - SIA@SirinLabs - SRN@Sonm_Eng - SNM@StellarLumens - XLM@StratisPlatform - STRAT@TezosPlatform - XTZ@TronNetworkEN - TRX@UnobtaniumUNO - UNO@Vechain_Official_English - VET@VertcoinCrypto - VTC@Viacoin - VIA@ViberateOfficial - VIB@VSYSOfficialGroup - VSYS@WavesCommunity - WAVES@ZB_English - ZB@ZCashco - ZEC@ZClassicCoin - ZCL@ZCoinProject - XZC

Crypto Communities:
@Aetrader - EN@Allemaalrijk - NL@Altcoins - EN @ArgenPool - ES@AussieCrypto - EN @UKBitcoin - EN@Binarydotcom - EN@BitcoinChat - EN @BitcoinInvestimento - PT @BitNovosti - RU @BitUniverse - EN@BlockhcainMinersGroup - EN@BsodPool - RU@BTCFinland - EN@BTChat - RU@BullBearr - EN@CoinFarm - EN @CoinGecko - EN @CoinMarketCap - EN @CoinPaprika - EN @CrypticIndia - EN@Crypto_CN - ZH @Crypto_ON - RU @CryptoAdvisorOfficial - EN@CryptoAquarium - EN @CryptoBeats - EN @CryptoBoerderij - NL @CryptoCharity - EN@CryptoCoinClub - EN@CryptoExpo_Moscow - RU@CryptoGene - EN@CryptoGifs - EN @CryptoGurusOfficial - EN@CryptoInsidersLobby -@CryptoHispanonet - ES@CryptoMining - EN @CryptoMondayDE - DE@CryptoOnMining - RU@CryptoRomania - RO @CryptoTipsChatFR - EN@CrypVision - DE @ElijaBoomC - EN@FanaticosCriptos - PT @ICOCountdown - EN@ILoveNina - EN@IndiaBits - EN @Kampungkoin - EN@KriptoTurkiye - TR @KryptoCoinsDE - DE@KryptoDETrading - DE @KryptoVerSteuerung - DE@MinerSpeak - EN@MiningBazar - RU @MMCryptoENG - EN@RepublicCrypto - EN@SideChains - EN@SmartContracts - EN@SportsBet - EN @StrapeCharts - EN@TamilBTC - EN@TheCoinFarm - EN @TheCryptoMob - EN@TokenMarket - EN@TrezorTalk - EN@Trollbox - EN @WCSETalks - EN@WhaleClub - EN (Invite Only)@WhaleClubClassRoom - EN@WhalePoolBTC - EN @WhaleTankChat - EN@XMRMine - EN

Crypto News Channels:
@AltCoin - EN@Avalbit - EN@Bit_Novosti - RU @BitcoinBravado - EN @BitcoinChannel - EN@BitcoinExchangeGuide - EN@BitcoinMagazinebot - EN @BitOracle - RU@BitRu - RU@CDiamonds - EN@Coin_Analyse - DE@CoinCentral - EN@CoinDesk - EN @CoinGape - EN@CoinNewsChannel - EN@CoinNewsDE - DE@CoinTelegraph - EN @Cointified - EN @Cripto247 - ES@CriptoNoticias - ES @Crypto_News_Channel - EN@CryptoAlerts - EN @CryptoAMB - EN@CryptoAsiaNews - ZH @CryptoChan - RU@CryptoClubAlerts - EN@CryptoCurrency - EN@CryptoExplorerChannel - EN@CryptoMartez - EN@CryptoNinja_News - EN@CryptoNyka - RU@CryptoRankNews - EN @CryptoSentinel - EN@CryptoSeson2020 - EN@CryptoSlateNews - EN@CryptoSnippets - EN@DecentralBox - DE @ForkLog - RU @JingBao - ZH @Krepta_News - RU@Krypto_Deutschland - DE@KryptoNachrichten - DE@NewCryptoJournal - EN@MGonCrypto - EN @OneMinuteLetter - EN@RichardsCalls - EN@SmartLiquidNews - EN@TheBCJ - RU @TONorg - EN @Unfolded - EN @WhalebotAlerts - EN @Xblockchain - FA

Trading Analysis:
@Altchica - EN@AltcoinWhales - EN @AnhemTrader - VI@Bitafta - EN@CacheStation - EN @Checksig - EN @CryptoCharters - EN @CryptoCredTA - EN @CryptoInMinutes - EN@CryptoScanner100Eyes - EN @ExcavoChannel - EN@KXiantu - ZH @Pierre_Crypt0_Public - EN@PsychoChromatic - EN @SalsaTekila - EN@ScalpingMF - EN@T45Investments - RU@TASmartAlerts - EN@TheLionDen - EN@TraderMillClub - EN@WCSEChannel - EN@WCSERussia - RU@WhaleTank - EN@WinterWolvesTA - EN @WolfPackSignals - EN

Indicator Bots:
@Crypto_Scanner - EN@CryptoQuantBotChannel - EN@BitmexRekts - EN@BounceBotBin - EN@Coin_Pulse - EN@Coin_Pulse_Listing - EN@CoinTrendz - EN @CryptoChan_HighLowPulse - EN @DataLightMe - EN@WallMonitor - EN @Whale_Alert_io - EN @WhaleCalls - EN @WhalepoolBTCFeed - EN @WhaleSniper - EN
submitted by Aztek_btc to cryptogroups

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this in a written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:
https://youtu.be/tPImTXFb_U8

BEGIN TRANSCRIPT:

Connor: 0:02:19.68,0:02:45.10
Alright so thank You Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain and also the lead developers of the Satoshi’s Vision client. So Daniel and Steve do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions in engineering and specifically for Bitcoin SV I am the technical director of the project which means that I'm a bit less hands-on than Daniel but I handle a lot of the liaison with the miners - that's the conditional project.
Daniel:
Hi I’m Daniel I’m the lead developer for Bitcoin SV. As the team's grown that means that I do less actual coding myself but more organizing the team and organizing what we’re working on.
Connor: 0:03:23.07,0:04:15.98
Great, so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask - you know, Bitcoin Cash is a little bit over a year old now, Bitcoin itself is ten years old - in the past little over a year, what has the process been like for you guys working with the multiple development teams, and, you know, why is it important that the Satoshi's Vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean yes well we've been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreements around it, but what I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the back end of Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in the backend, and if they interoperate on a non-consensus level, great. If they don't, not such a big problem - there will obviously be bridges between the two. So, yeah, I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing?
Daniel: 0:06:19.59,0:07:55.55
Sure, yeah - so our release was concentrated on stability, right, with the first release of Bitcoin SV, and that involved doing a large amount of additional testing, particularly not so much at the unit test level but at the system test level - setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren't any other side effects. Because, you know, it was quite a rush to release the first version, we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we're not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two. A slightly different numbering scheme to the testnet three that everyone's probably used to – that’s just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one’s set to reset every couple of days. The other one [Testnet Two] was set to post activation so that we can test all of the consensus changes. The third one was a performance test network which I think most people have probably have heard us refer to before as Gigablock Testnet. I get my tongue tied every time I try to say that word so I've started calling it the Performance test network and I think we're planning on having two of those: one that we can just do our own stuff with and experiment without having to worry about external unknown factors going on and having other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests, but the other one (which I think might still be a work in progress so Daniel might be able to answer that one) is one of them where basically everyone will be able to join and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we so we recently shared the details of Testnet One and Two with the with the other BCH developer groups. The Gigablock test network we've shared up with one group so far but yeah we're building it as Steve pointed out to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question I saw that you posted on Twitter about the revived Gigablock testnet initiative and so it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were coming down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. So the Gigablock test network was first set up by Bitcoin Unlimited with nChain's help and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so there are, I think, three. We have produced some large blocks there and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team's been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all and we put all our focus and energy into establishing the QA process and making sure that that change was safe, and that was a good process for us to go through. It highlighted what we were missing in our team – we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is performance related work which, as Daniel mentioned, the results of our performance testing fed into what tasks we were gonna start working on for the performance related stuff. Now that work is still in progress - for some of the items that we identified the code is done and is going through the QA process, but it's not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it's been QA first, performance second. The performance enhancements are close and on the horizon but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level view of the software. There's kind of two groups of them mainly. One that are internal to the software – to Bitcoin SV itself - improving the way it works inside. And then there's other ones that interface it with the outside world. One of those in particular we're working closely with another group to make a compatible change - it's not consensus changing or anything like that - but having the same interface on multiple different implementations will be very helpful right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do - that some of the other developer groups weren't willing to do right now - is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about that - a lot of the objection to either removing the block size limit entirely or increasing it on a larger scale is this idea of the infinite block attack, right, and that kind of came through in a lot of the questions. What are your thoughts on the "infinite block attack" - is it something that really exists, is it something that miners themselves should be more proactive on preventing, or I guess what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something that there are probably two schools of thought about. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now so that the limit is already raised by the time the software can handle it, and you don't run into the limit when the software improves. Obviously we're from the latter school of thought. As I said before we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinite block attack itself, I mean there are a number of mitigations that you can put in place. I mean firstly, you know, going down to a bit of the tech detail - when you send a block message or send any peer to peer message there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB then obviously you know something's wrong so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128, you know it's kind of pointless to download that message. So I mean these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit from the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this. One other aspect of the infinite block attack - and let's not call it the infinite block attack, let's just call it the large block attack - is that a large block takes a lot of time to validate, and that we've gotten around by having parallel pipelines for blocks to come in. So you've got a block that's coming in, it's got an unknown stuck on it for two hours or whatever, downloading and validating it. At some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline then, you know, the problem kind of goes away.
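A sketch of the first mitigation Steve describes - names like MAX_BLOCK_SIZE and the exact limits are illustrative, not actual node configuration:

    MAX_BLOCK_SIZE = 128 * 1024 * 1024  # illustrative consensus limit

    def receive_message(sock, declared_size: int) -> bytes:
        # the p2p header declares the payload size up front, so over-limit
        # messages can be refused before any bandwidth is wasted on them
        if declared_size > MAX_BLOCK_SIZE:
            raise ConnectionAbortedError("declared size exceeds block size limit")
        buf = b""
        while len(buf) < declared_size:
            chunk = sock.recv(65536)
            if not chunk:
                raise ConnectionAbortedError("peer hung up mid-message")
            buf += chunk
            if len(buf) > declared_size:  # peer lied in its header
                raise ConnectionAbortedError("message exceeds declared size")
        return buf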
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? Because there's a lot of questions around you know what the practical size of scaling right now Bitcoin SV could do and the concerns around propagating those blocks across the whole network.
Steve: 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so a 32MB block does not mean sending 32MB of data in most cases, almost all cases. The concern here that I think I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall as they've given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that that can sustain is pretty large. So we're looking at a couple of options - it may well be the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side and send it over splitters - over multiple links - and reassemble it on the other side so we can sort of transit the Great Firewall without too much trouble. But I mean getting back to the core of your question - yes there is a theoretical limit to block size propagation time and that's kind of where Moore's Law comes in. Putting in faster links kicks that can further down the road, and you just keep on putting in faster links. I don't think 128 meg blocks are going to be an issue though with the speed of the internet that we have nowadays.
Connor: 0:21:34.99,0:22:17.84
One of the other changes that you guys are introducing is increasing the max script size - I think right now it's going from 201 to 500 [opcodes]. So I guess a few of the questions we got were: #1, why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and #2, specifically, how certain are you that there are no remaining n squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's interesting, the decision - we were initially planning on removing that cap altogether, and the next cap that comes into play after that (the next effective cap) is a 10,000 byte limit on the size of the script. We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that when the primary criticism I think that was leveled against us was that it's dangerous to increase that limit to unlimited. We did that because we're being conservative. We did some research into these n squared bugs, sorry – attacks, that people have referred to. We identified a few of them and we had a hard think about it and thought - look, if we can find this many in a short time we can fix them all (the whack-a-mole approach), but it does suggest that there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything that's behind it in the queue has to stop and wait. So what we wanted to do, and this is something we've got an engineer actively working on right now, is once that script validation code path is properly parallelized (parts of it already are), then we'll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates and they'll just keep on passing through. Once you've got that in place, I think we're in a much better position to get rid of that limit entirely because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep ticking along. There are other mitigations for this as well - I mean you could always put a time limit on script execution if you wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job I think is to provide the tools for the miners and the miners can then choose, you know, how to make use of them - if they want to set time limits on script execution then that's a choice for them.
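A sketch of the "lanes of the highway" idea - is_wellknown_template and run_script are placeholders for a real template classifier and the script interpreter:

    from concurrent.futures import ThreadPoolExecutor

    def is_wellknown_template(task) -> bool:
        return True  # placeholder classifier (e.g. P2PKH detection)

    def run_script(task) -> bool:
        return True  # placeholder for actual script execution

    standard_lane = ThreadPoolExecutor(max_workers=6)  # reserved for known templates
    generic_lane = ThreadPoolExecutor(max_workers=2)   # everything else, possibly slow

    def submit(task):
        # a pathological script can only clog the generic lane; well-known
        # payment scripts keep flowing through their reserved threads
        lane = standard_lane if is_wellknown_template(task) else generic_lane
        return lane.submit(run_script, task)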
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer to peer network, it doesn't have to accept that transaction, you can reject it. If it looks suspicious to the node it can just say you know we're not going to deal with that, or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can’t do that is when it's in a block already, but then it could decide to reject the block as well. It's all possibilities there could be in the software.
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60
There's a lot of discussion about the re-enabled opcodes coming – OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Can you maybe explain the significance of those opcodes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well I mean one of the most significant things is that, other than two (OP_2MUL and OP_2DIV, which are minor variants of MUL and DIV), they represent almost the complete set of original op codes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why are you putting OP_MUL back in if you're planning on changing them to big number operations instead of the 32-bit limit that they're currently imposed upon. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We've got add, divide, subtract, modulo – it's odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer to that question is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine, or rather not completing it, but putting it back the way that it was meant to be.
Connor: 0:28:20.42,0:29:22.62
Big Num vs 32 Bit. I've seen Daniel - I think I saw you answer this on Reddit a little while ago - but the new opcodes use logical shifts whereas Satoshi's version used arithmetic shifts. The general question that I think a lot of people keep bringing up is, maybe in a rhetorical way: why not restore it back to the way Satoshi had it exactly - what are the benefits of changing it now to operate a little bit differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah, there's two parts there - the big number one, and LSHIFT being a logical shift instead of arithmetic. So when we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. The new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic-based shifts - I think I've posted a short script that did that - but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values, the values that represent numbers. They're little-endian, which means the bytes are swapped around compared to what many other systems use - what I'd consider normal - big-endian. And if you start shifting that properly as a number then the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement bitwise shift with arithmetic, so we chose to make them bitwise operators - that's what we proposed.
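To illustrate the distinction (a sketch, not the exact consensus semantics):

    def bitwise_lshift(data: bytes, n: int) -> bytes:
        # treat the bytes as one big bit field, preserving length (new OP_LSHIFT style)
        width = 8 * len(data)
        value = (int.from_bytes(data, "big") << n) % (1 << width)
        return value.to_bytes(len(data), "big")

    def arithmetic_lshift(num_le: bytes, n: int) -> bytes:
        # treat the bytes as a little-endian script number and shift the *number*
        value = int.from_bytes(num_le, "little") << n
        return value.to_bytes(max(1, (value.bit_length() + 7) // 8), "little")

    bitwise_lshift(b"\x80\x00", 1)     # -> b'\x00\x00': the top bit falls off the field
    arithmetic_lshift(b"\x80\x00", 1)  # -> b'\x00\x01': the number 128 doubled to 256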
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision that was actually made in May, or rather a consequence of decisions that were made in May. So in May we reintroduced OP_AND, OP_OR, and OP_XOR, and that was also another decision to replace three different string operators with OP_SPLIT was also made. So that was not a decision that we've made unilaterally, it was a decision that was made collectively with all of the BCH developers - well not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL, I think, and these are single operators that divide or multiply the value by two, right. But it was pointed out that that can very easily be achieved by just doing a multiply by two instead of having a separate operator for it, so we scrapped those, we took them back out, because we wanted to keep the number of operators to a minimum, yeah.
Steve: 0:32:17.59,0:33:47.20
There was an appetite around for keeping the operators minimal. I mean the decision about the idea to replace OP_SUBSTR, OP_LEFT, OP_RIGHT with OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with May opcodes and obviously Gavin's word kind of carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) and treat the data as big-endian data streams (well, sorry big-endian not really applicable just plain data strings) it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have just been nonsensical and very difficult for anyone to work with, so yeah. I mean it's a bit like P2SH - it wasn't a part of the original Satoshi protocol that once some things are done they're done and you know if you want to want to make forward progress you've got to work within that that framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones it gets really complicated, because then you can't change the behavior of the existing opcodes - and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number operators without seriously looking at what scripts might be out there and the impact of that change on those existing scripts. The other point is that you don't know what scripts are out there, because of P2SH - there could be scripts whose content you don't know, and you don't know what effect changing the behavior of these operators would have on them. The big number thing is tricky; it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That’s something we've reached out to the other implementation teams about - we'd actually really like their input on the best way to go about restoring big number operations. It has to be done extremely carefully, and I don't know if we'll get there by May next year, or when, but we're certainly willing to put a lot of resources into it, and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done, and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Along a similar vein: Bitcoin Core introduced this concept of standard and non-standard scripts. I had a pretty interesting conversation with Clemens Ley about use cases for “non-standard scripts”, as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or pushed back on him about doing that. So what are your thoughts about non-standard scripts, and about the IsStandard check as a whole?
Steve: 0:35:58.31,0:37:35.73
I’d actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some dedicated well-known script templates - and when you say "well-known script template", there's already a check in Bitcoin that tells you whether a script is well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than a technical change. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is that not too many miners are using those configuration options. So standard transactions as a concept is only meaningful to a degree, I suppose, but I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I've had with CoinGeek they're quite keen on making their miners accept, at least initially, a wider variety of transactions.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes - you want to streamline the base use case of cash payments and prioritize them. That's where it will remain important, but on the interfaces from the node to the rest of the network, I could easily see it being removed.
Cory: 0:38:06.24,0:38:35.46
Connor mentioned that there are some people that disagree with Bitcoin SV and what they're doing - there are a lot of questions around why November. Why implement these changes in November - they think that maybe a six-month delay might avoid a split. So first off, what do you think about the possibility of a split, and what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well, in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork - I think on August 16th or 17th, something like that - and their client as well, and it included CTOR and it included DSV. Now, for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in, they're in. They can't be reversed - CTOR maybe you could reverse at a later date, but DSV, once someone's put a P2SH transaction (or even a non-P2SH transaction) in the blockchain using that opcode, is irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but as for what we're doing - Bitcoin ABC already published their spec for May, and it is our spec for the new opcodes. So in terms of urgency - should we wait? The fact is that we can't. Come November, it's a bit like Segwit - once Segwit was in, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it was never going to be that easy, and it would cause a lot of economic disruption. So that's it - we're putting our changes in because it's not going to make a difference either way: there's going to be a divergence of consensus rules whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade, we'd be pushing ahead with a no-change release. The November upgrade is there, so we should use it while we can, adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been floated around, by Ryan Charles among others, is the idea that it's a subsidy - that it takes a whole megabyte and crunches it down, so the computation time stays the same but the cost is lower. Do you share his view on that, and what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this - there are different ways to look at it. I'm an engineer - my specialization is software - so on the economics of it I hear different opinions; I trust some more than others, but I am NOT an economist. With my limited expertise I kind of agree with the ones saying it's a subsidy - it looks very much like one to me - but that's not my area. What I can talk about is the software. Adding DSV adds really quite a lot of complexity to the code, and it's a big change. And what are we going to do - every time someone comes up with an idea, add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like: how big is this client going to become? Is this node going to have to handle every kind of weird opcode that's out there? The software is just going to get unmanageable. With DSV, my main consideration at the beginning was: if you can implement it in script, you should, because that way you keep the node software simple, you keep it stable, and it's easier to test that it works properly and correctly. It's almost like adding (?) code to a microprocessor - why would you do that if you can implement it already in the script that is there?
Steve: 0:43:36.16,0:46:09.71
It’s actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (i.e. where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - you can achieve it with a combination of the SPLIT, SWAP and DROP opcodes. So at the really primitive script level we've got this philosophy of "let's keep it minimal", and at this other (?) level the philosophy is "let's just add a new opcode for every primitive function". Daniel's right - it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument: why have a scripting language at all? Why not just hard-code all of these functions in, one at a time? You know, pay-to-public-key-hash is a well-known construct (?) - why not just do that and not bother executing a script at all? But once we've done that, we take away all of the flexibility for people to innovate. It's a philosophical difference, I think, but one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things people feel they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction and so a gigabyte script, then you could do any kind of crypto that you wanted, even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller - a Rabin signature script shrinks from 100MB to a couple of hundred bytes.
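[For concreteness, a small Python sketch - modelling script operations on a simple stack, not actual node code - of the OP_SUBSTR emulation Steve describes:]

    # Emulating OP_SUBSTR(s, start, size) using only the primitives
    # OP_SPLIT, OP_SWAP and OP_DROP, modelled as stack operations.
    def op_split(stack):
        n = stack.pop()               # split position
        s = stack.pop()
        stack.append(s[:n])           # left part
        stack.append(s[n:])           # right part

    def op_swap(stack):
        stack[-1], stack[-2] = stack[-2], stack[-1]

    def op_drop(stack):
        stack.pop()

    def substr(s: bytes, start: int, size: int) -> bytes:
        stack = [s, start]
        op_split(stack)               # [s[:start], s[start:]]
        op_swap(stack)
        op_drop(stack)                # [s[start:]]
        stack.append(size)
        op_split(stack)               # [s[start:start+size], rest]
        op_drop(stack)                # [s[start:start+size]]
        return stack.pop()

    assert substr(b"hello world", 6, 5) == b"world"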
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed - was released originally - with script. It didn't have to be: instead of having transactions with script you could have accounts, and you could say "transfer so many BTC from this public key to this one" - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you really dig into what it can do, it's amazing. I'm really looking forward to seeing some very interesting applications come from that. Awemany's zero-conf script was really interesting. It relies on DSV, which is a problem (and there are some other things I don't like about it), but him diving in and using script to solve this problem was really cool - it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I actually asked a couple of people on our research team who have been working on the Rabin signature stuff this morning, because I wasn't sure where they were up to with it. They're working on a proof of concept (which I believe is pretty close to done) of a Rabin signature script - it will use smaller signatures so that it can fit within the current limits, but it will be effectively the same algorithm (as DSV). I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini-Rabin signature).
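[For context: verifying a textbook Rabin signature is a single modular squaring, which is why it is feasible in script given big-number arithmetic or enough opcodes. A minimal Python sketch - textbook Rabin, not necessarily the research team's exact scheme:]

    import hashlib

    def hash_to_int(message: bytes, n: int) -> int:
        # Illustrative hash-to-integer; a real scheme pads or retries so
        # the digest is a quadratic residue modulo n.
        return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

    def rabin_verify(message: bytes, signature: int, n: int) -> bool:
        # n = p*q is the public key; only the signer, knowing p and q,
        # can compute the modular square root that makes this check pass.
        return pow(signature, 2, n) == hash_to_int(message, n)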
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I already know the answer to this question, but there are a lot of questions about ending experimentation on Bitcoin. With the plan that Bitcoin SV is on, do you see a potential final release - that there will be no new opcodes ever released (maybe five years down the road we just solidify the base protocol and move forward with that) - or are you more of the view that, with appropriate testing, new opcodes can still be introduced?
Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place. In my opinion the cryptographic primitive functions, for example - CHECKSIG uses ECDSA with a specific elliptic curve, HASH256 uses SHA-256 - at some point in the future those are going to no longer be as secure as we would like them to be, and we'll replace them with different hash functions and verification functions at some point, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed. The way I could imagine that happening is that, with the full scripting language, some solution is implemented and we discover that it's really useful, and over a period measured in years, not days, we find a lot of transactions are using this feature - then maybe we should look at introducing an opcode to optimize it. But optimizing before we even know whether it's going to be useful - that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner's point of view: does it make sense for them to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? Ultimately these are going to be miners' decisions, not developer decisions. Developers can of course offer their input - I wouldn't expect every miner to be an expert on script - but as we're already seeing, miners are actually starting to employ their own developers. I'm not just talking about us - there are other miners in China that I know have some really bright people on their staff who question and challenge all of the changes - study them and produce their own reports. We've been lucky to be able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to btc

A reminder why CryptoNote protocol was created...

CryptoNote v 2.0
Nicolas van Saberhagen
October 17, 2013
1 Introduction
“Bitcoin” [1] has been a successful implementation of the concept of p2p electronic cash. Both professionals and the general public have come to appreciate the convenient combination of public transactions and proof-of-work as a trust model. Today, the user base of electronic cash is growing at a steady pace; customers are attracted to low fees and the anonymity provided by electronic cash, and merchants value its predictable and decentralized emission. Bitcoin has effectively proved that electronic cash can be as simple as paper money and as convenient as credit cards.
Unfortunately, Bitcoin suffers from several deficiencies. For example, the system’s distributed nature is inflexible, preventing the implementation of new features until almost all of the network users update their clients. Some critical flaws that cannot be fixed rapidly deter Bitcoin’s widespread propagation. In such inflexible models, it is more efficient to roll out a new project rather than perpetually fix the original project.
In this paper, we study and propose solutions to the main deficiencies of Bitcoin. We believe that a system taking into account the solutions we propose will lead to a healthy competition among different electronic cash systems. We also propose our own electronic cash, “CryptoNote”, a name emphasizing the next breakthrough in electronic cash.
2 Bitcoin drawbacks and some possible solutions
2.1 Traceability of transactions
Privacy and anonymity are the most important aspects of electronic cash. Peer-to-peer payments seek to be concealed from third parties' view, a distinct difference when compared with traditional banking. In particular, T. Okamoto and K. Ohta described six criteria of ideal electronic cash, which included “privacy: relationship between the user and his purchases must be untraceable by anyone” [30]. From their description, we derived two properties which a fully anonymous electronic cash model must satisfy in order to comply with the requirements outlined by Okamoto and Ohta:
Untraceability: for each incoming transaction all possible senders are equiprobable.
Unlinkability: for any two outgoing transactions it is impossible to prove they were sent to the same person.
Unfortunately, Bitcoin does not satisfy the untraceability requirement. Since all the transactions that take place between the network’s participants are public, any transaction can be unambiguously traced to a unique origin and final recipient. Even if two participants exchange funds in an indirect way, a properly engineered path-finding method will reveal the origin and final recipient.
It is also suspected that Bitcoin does not satisfy the second property. Some researchers stated ([33, 35, 29, 31]) that a careful blockchain analysis may reveal a connection between the users of the Bitcoin network and their transactions. Although a number of methods are disputed [25], it is suspected that a lot of hidden personal information can be extracted from the public database.
Bitcoin’s failure to satisfy the two properties outlined above leads us to conclude that it is not an anonymous but a pseudo-anonymous electronic cash system. Users were quick to develop solutions to circumvent this shortcoming. Two direct solutions were “laundering services” [2] and the development of distributed methods [3, 4]. Both solutions are based on the idea of mixing several public transactions and sending them through some intermediary address, which in turn suffers the drawback of requiring a trusted third party. Recently, a more creative scheme was proposed by I. Miers et al. [28]: “Zerocoin”. Zerocoin utilizes cryptographic one-way accumulators and zero-knowledge proofs which permit users to “convert” bitcoins to zerocoins and spend them using anonymous proof of ownership instead of explicit public-key based digital signatures. However, such knowledge proofs have a constant but inconvenient size - about 30kb (based on today’s Bitcoin limits), which makes the proposal impractical. The authors admit that the protocol is unlikely to ever be accepted by the majority of Bitcoin users [5].
2.2 The proof-of-work function
Bitcoin creator Satoshi Nakamoto described the majority decision-making algorithm as “one-CPU-one-vote” and used a CPU-bound pricing function (double SHA-256) for his proof-of-work scheme. Since users vote for the single ordering of transaction history [1], the reasonableness and consistency of this process are critical conditions for the whole system.
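[For concreteness, a minimal sketch - illustrative, not consensus code - of the pricing function referred to above, double SHA-256 compared against a target:]

    import hashlib

    def meets_target(header: bytes, target: int) -> bool:
        # Bitcoin's proof-of-work: the double SHA-256 of the 80-byte block
        # header, read as a little-endian integer, must be below the target.
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        return int.from_bytes(digest, "little") < target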
The security of this model suffers from two drawbacks. First, it requires 51% of the network’s mining power to be under the control of honest users. Secondly, the system’s progress (bug fixes, security fixes, etc.) requires the overwhelming majority of users to support and agree to the changes (this occurs when the users update their wallet software) [6]. Finally, this same voting mechanism is also used for collective polls about the implementation of some features [7].
This permits us to conjecture the properties that must be satisfied by the proof-of-work pricing function. Such a function must not enable a network participant to have a significant advantage over another; it requires parity between common hardware and the high cost of custom devices. From recent examples [8], we can see that the SHA-256 function used in the Bitcoin architecture does not possess this property, as mining becomes more efficient on GPUs and ASIC devices when compared to high-end CPUs.
Therefore, Bitcoin creates favourable conditions for a large gap between the voting power of participants, as it violates the “one-CPU-one-vote” principle: GPU and ASIC owners possess a much larger voting power than CPU owners. It is a classical example of the Pareto principle, where 20% of a system’s participants control more than 80% of the votes.
One could argue that such inequality is not relevant to the network’s security, since it is not the small number of participants controlling the majority of the votes but the honesty of these participants that matters. However, such an argument is somewhat flawed, since it is the possibility of cheap specialized hardware appearing, rather than the participants’ honesty, which poses a threat. To demonstrate this, let us take the following example. Suppose a malevolent individual gains significant mining power by creating his own mining farm using the cheap hardware described previously. Suppose the global hashrate decreases significantly, even for a moment; he can then use his mining power to fork the chain and double-spend. As we shall see later in this article, it is not unlikely for the previously described event to take place.
2.3 Irregular emission
Bitcoin has a predetermined emission rate: each solved block produces a fixed amount of coins. Approximately every four years this reward is halved. The original intention was to create a limited smooth emission with exponential decay, but in fact we have a piecewise linear emission function whose breakpoints may cause problems for the Bitcoin infrastructure.
When a breakpoint occurs, miners start to receive only half the value of their previous reward. The absolute difference between 12.5 and 6.25 BTC (projected for the year 2020) may seem tolerable. However, the 50 to 25 BTC drop that took place on November 28, 2012 felt inappropriate to a significant number of members of the mining community. Figure 1 shows a dramatic decrease in the network’s hashrate at the end of November, exactly when the halving took place. This event could have been the perfect moment for the malevolent individual described in the proof-of-work section to carry out a double-spending attack [36].
Fig. 1. Bitcoin hashrate chart (source: http://bitcoin.sipa.be)
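[A minimal sketch of the piecewise emission function described above, using Bitcoin's actual schedule of a 50 BTC initial reward halving every 210,000 blocks:]

    def block_reward_satoshi(height: int) -> int:
        # The subsidy halves every 210,000 blocks (~4 years); these are
        # the "breakpoints" of the piecewise linear emission curve.
        halvings = height // 210_000
        return (50 * 100_000_000) >> halvings if halvings < 64 else 0

    assert block_reward_satoshi(0) == 5_000_000_000        # 50 BTC
    assert block_reward_satoshi(210_000) == 2_500_000_000  # 25 BTC (Nov 2012)
    assert block_reward_satoshi(420_000) == 1_250_000_000  # 12.5 BTC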
2.4 Hardcoded constants
Bitcoin has many hard-coded limits, where some are natural elements of the original design (e.g. block frequency, maximum amount of money supply, number of confirmations) whereas others seem to be artificial constraints. It is not so much the limits themselves as the inability to change them quickly, if necessary, that causes the main drawbacks. Unfortunately, it is hard to predict when the constants may need to be changed, and replacing them may lead to terrible consequences.
A good example of a hardcoded limit change leading to disastrous consequences is the block size limit, set to 250kb. This limit was sufficient to hold about 10,000 standard transactions. In early 2013 this limit had almost been reached, and an agreement was made to increase it. The change was implemented in wallet version 0.8 and ended with a 24-block chain split and a successful double-spend attack [9]. While the bug was not in the Bitcoin protocol but rather in the database engine, it could have been easily caught by a simple stress test if there were no artificially introduced block size limit.
Constants also act as a form of centralization point. Despite the peer-to-peer nature of Bitcoin, an overwhelming majority of nodes use the official reference client [10] developed by a small group of people. This group makes the decision to implement changes to the protocol and most people accept these changes irrespective of their “correctness”. Some decisions caused heated discussions and even calls for boycott [11], which indicates that the community and the developers may disagree on some important points. It therefore seems logical to have a protocol with user-configurable and self-adjusting variables as a possible way to avoid these problems.
2.5 Bulky scripts
The scripting system in Bitcoin is a heavy and complex feature. It potentially allows one to create sophisticated transactions [12], but some of its features are disabled due to security concerns and some have never even been used [13]. The script (including both the sender’s and receiver’s parts) for the most popular transaction in Bitcoin looks like this: <sig> <pubKey> OP_DUP OP_HASH160 <pubKeyHash> OP_EQUALVERIFY OP_CHECKSIG. The script is 164 bytes long, whereas its only purpose is to check that the receiver possesses the secret key required to verify his signature.
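[The 164 bytes break down roughly as follows; the sizes are typical values, assuming an uncompressed public key:]

    # scriptSig: push <sig> (~72-byte DER signature) + push <pubKey> (65 bytes)
    script_sig = 1 + 72 + 1 + 65                # 139 bytes
    # scriptPubKey: OP_DUP OP_HASH160 <20-byte hash> OP_EQUALVERIFY OP_CHECKSIG
    script_pubkey = 1 + 1 + 1 + 20 + 1 + 1      # 25 bytes
    assert script_sig + script_pubkey == 164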
Read the rest of the white paper here: https://cryptonote.org/whitepaper.pdf
submitted by xmrhaelan to CryptoCurrency

Non Standard Transaction Redeem Script

If I decide to use a non-standard transaction on the main chain, I wonder: are there some mining pools who accept such transactions? This script would be used as the buyer's deposit escrow transaction during a trade circle (PayPal - BTC). "a" is the seller, "b" is the escrow service, and "c" is the buyer, who must make a deposit transaction before he can buy BTC instantly. The limit per transaction is determined by the transaction input. If the buyer goes crazy and reverses a transaction, the seller can track it and supply IPN messages as proof to the escrow service, revealing the scammer and taking his bitcoin back. Before 365 days expire, the BTC can be spent, but the signature of "b" (the escrow service) must be included. 180 days is the time limit for reporting an "unauthorized transaction", so the buyer will be able to buy BTC instantly 180 days after the deposit transaction. I'm still thinking about making the script better, but any effort will be a waste of time if I do not find a way to include the spend transaction in a block.
Any comment on script below would be helpful too. :)
OP_IF 365days OP_CHECKSEQUENCEVERIFY OP_DROP  OP_CHECKSIG OP_ELSE OP_DUP  OP_CHECKSIGVERIFY 2    3 OP_CHECKMULTISIG OP_ENDIF 
Update:
OP_IF 365days OP_CHECKSEQUENCEVERIFY OP_DROP  OP_CHECKSIG OP_ELSE  OP_CHECKSIGVERIFY 1   2 OP_CHECKMULTISIG OP_ENDIF 
Next proposal:
OP_DUP  OP_CHECKSIG OP_IF OP_DROP 1   2 OP_CHECKMULTISIG OP_ELSE 365days OP_CHECKSEQUENCEVERIFY OP_DROP  OP_CHECKSIG OP_ENDIF 
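The angle-bracketed key placeholders in the scripts above appear to have been stripped (most likely swallowed as HTML tags). A guess at the intended structure of the last proposal, with hypothetical <pubKeyA> (seller), <pubKeyB> (escrow) and <pubKeyC> (buyer) placeholders - the key assignments are assumptions, not recovered text:

    OP_DUP <pubKeyC> OP_CHECKSIG
    OP_IF
        OP_DROP
        1 <pubKeyA> <pubKeyB> 2 OP_CHECKMULTISIG
    OP_ELSE
        <365 days> OP_CHECKSEQUENCEVERIFY OP_DROP
        <pubKeyC> OP_CHECKSIG
    OP_ENDIF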
Update:
I'm feeling very disappointed. I did not find any way to add a non-standard transaction into a block. Even the forks did not go far away from current BTC - they are just forks working on scaling disk space, just like the Core developers waiting for Lightning to make their lives easier.
Bitcoin is peer-to-peer cash.
But to become that truly, we must handle current "working" cash, right? Banks and payment processors are blackmailing the people who would like to use this money (of the future), and we are allowing them to do it with no problem. You cannot put primitive multisig up as the only complex standard, exclude any alternative, and expect exchanges to work correctly and not be hacked.
Exchanges cannot work with primitive multisig, and cannot work with any of the "standard" transactions on the current list, in a way that provides a secured exit from fiat to crypto. Exchanges are hacked in one way or another - by simulating a breach, or by being hacked for real. Most exchanges that have been hacked are still working as before, same as banks: no jail, no money. Basically it is the same sh*t we have seen in the bank bail-in scenario. Nothing has changed.
Bitcoin is able to transfer ownership of value from one owner to another by building and using a trustless, decentralized, censorship-resistant environment, and all of that is made possible by math, not by the power of a central authority.
So exchanges should be able to handle that the same way - not by collecting bitcoins and storing them in a chest like banks do, repeating "everything is alright and we will not be hacked, for now".
It is useless if we cannot provide a real exit for the other people who are waiting for it.
*MAKE "STANDARD" TRANSACTION LIST LONGER, SO EXCHANGES CAN HANDLE "non authorized bank account transactions", etc..."*
submitted by zninja-bg to Bitcoin

The Problems with Segregated Witness

MORE: https://medium.com/the-publius-letters/segregated-witness-a-fork-too-far-87d6e57a4179
... 3. The Problems with Segregated Witness
While it is true that Segregated Witness offers some improvements to the Bitcoin network, we shall now examine why these benefits are not nearly enough to outweigh the dangers of deploying SW as a soft fork.
3.1 SW creates a financial incentive for bloating witness data
SW allows for a theoretical maximum block size limit of ~4 MB. However, this is only true if the entire block is occupied by transactions of a very small ‘base size’ (e.g. P2WPKH with 1 input, 1 output). In practice, based on the average transaction size today and the types of transactions made, blocks are expected to max out at ~1.7 MB post-SW (Figure 10; assuming all transactions are using SW unspent outputs — a big assumption).
However, the 4 MB theoretical limit creates a key problem. Miners and full node operators need to ensure that their systems can handle the 4 MB limit, even though at best they will only be able to support ~40% of that transaction capacity. Why? Because there exists a financial incentive for malicious actors to design transactions with a small base size but large and complex witness data. This is exacerbated by the fact that witness scripts (i.e. P2WSH or P2SH-P2WSH) will have higher script size limits than normal P2SH redeem scripts (i.e., from 520 bytes to 3,600 bytes [policy] or 10,000 bytes [consensus]). These potential problems only worsen as the block size limit is raised in the future; for example, a 2 MB maximum base size creates an 8 MB adversarial case. This problem hinders scalability and makes future capacity increases more difficult.
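To see why the adversarial case diverges from the typical case, here is a minimal sketch of the block weight rule SW introduces (BIP141: weight = 3 × base size + total size, capped at 4,000,000 weight units):

    MAX_WEIGHT = 4_000_000

    def block_weight(base_size: int, witness_size: int) -> int:
        total_size = base_size + witness_size
        return 3 * base_size + total_size

    # A block with no witness data hits the cap at 1 MB of base data:
    assert block_weight(1_000_000, 0) == MAX_WEIGHT
    # An adversarial block: small base, bloated witness, ~3.7 MB on the wire:
    assert block_weight(100_000, 3_600_000) == MAX_WEIGHT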
3.2 SW fails to sufficiently address the problems it intends to solve
If SW is activated by soft fork, Bitcoin will effectively have two classes of UTXOs (non-SW vs SW UTXOs), each with different security and economic properties. Linear signature hashing and malleability fixes will only be available to the SW UTXO. Most seriously, there are no enforceable constraints on the growth of the non-SW UTXO. This means that the network (even upgraded nodes) is still vulnerable to transaction malleability and quadratic signature hashing from non-SW outputs that existed before, or are created after, the soft fork.
The lack of enforceability that comes with a soft fork leaves Bitcoin users and developers vulnerable to precisely the type of attacks SW is designed to prevent. While spending non-SW outputs will be comparatively more expensive than SW outputs, this remains a relatively weak disincentive for a motivated attacker.
It is also unclear what proportion of the legacy UTXO set will migrate to SW outputs. Long-term holders of Bitcoin, such as Satoshi Nakamoto (presumed to be in possession of ~1 million Bitcoin), may keep their coins in non-SW outputs (although it would be a significant vote of confidence in SW by Nakamoto if they were to migrate!). This makes future soft or hard forks to Bitcoin more difficult, as multiple classes of UTXOs must now be supported to prevent coins from being burned or stolen.
One key concern is that the coexistence of two UTXO types may tempt developers and miners in the future to destroy the non-SW UTXO. Some may view this as an unfounded concern, but the only reason that this is worth mentioning in this article are the comments made by influential individuals associated with Bitcoin Core: Greg Maxwell has postulated that “abandoned UTXO should be forgotten and become unspendable,” and Theymos has claimed “the very-rough consensus is that old coins should be destroyed before they are stolen to prevent disastrous monetary inflation.”
As the security properties of SW outputs are marginally better than non-SW outputs, it may serve as a sufficient rationalization for this type of punitive action.
The existence of two UTXO types with different security and economic properties also deteriorates Bitcoin’s fungibility. Miners and fully validating nodes may decide not to relay, or include in blocks, transactions that spend to one type or the other. While on one hand this is a positive step towards enforceability (i.e. soft enforceability), it is detrimental to unsophisticated Bitcoin users who have funds in old or non-upgraded wallets. Furthermore, it is completely reasonable for projects such as the lightning network to reject forming bidirectional payment channels (i.e. a multisignature P2SH address) using non-SW P2SH outputs due to the possibility of malleability. Fundamentally this means that the face-value of Bitcoin will not be economically treated the same way depending on the type of output it comes from.
It is widely understood in software development that measures which rely on the assumption of users changing their behavior to adopt better security practices are fundamentally doomed to fail; more so when the unpatched vulnerabilities are permitted to persist and grow. An example familiar to most readers would be the introduction and subsequent snail’s pace uptake of HTTPS.
3.3 SW places complex requirements on developers to comply while failing to guarantee any benefits
SW as a soft fork brings with it a mountain of irreversible technical debt, with multiple opportunities for developers to permanently cause the loss of user funds. For example, the creation of P2SH-P2WPKH or P2SH-P2WSH addresses requires the strict use of compressed pubkeys, otherwise funds can be irrevocably lost. Similarly, the use of OP_IF, OP_NOTIF, OP_CHECKSIG, and OP_CHECKMULTISIG must be carefully handled for SW transactions in order to prevent the loss of funds. It is all but certain that some future developers will cause user loss of funds due to an incomplete understanding of the intricacies of SW transaction formats.
In terms of priorities, SW is not a solution to any of the major support ticket issues that are received daily by Bitcoin businesses such as BitPay, Coinbase, Blockchain.info, etc. The activation of SW will not increase the transaction capacity of Bitcoin overnight, but only incrementally as a greater percentage of transactions spend to SW outputs. Moreover, the growing demand for on-chain transactions may very well exceed the one-off capacity increase as demonstrated by the increasing frequency of transaction backlogs.
In contrast to a basic block size increase (BBSI) from a coordinated hard fork, many wallets and SPV clients would immediately benefit from new capacity increases without needing to rewrite their own software, as they must do with SW. With a BBSI, unlike SW, no transaction format or signature changes are required on the part of Bitcoin-using applications.
Based on previous experience with soft forks in Bitcoin, upgrades tend to roll out across the ecosystem over some time. At the time of this writing, only 28 of the 78 businesses and projects (36%) who have publicly committed to the upgrade are SW-compatible. Any capacity increase that Bitcoin businesses and users of the network desire to ease on-chain fee pressure will unlikely be felt for some time, assuming that transaction volume remains unchanged and does not continue growing. The unpredictability of this capacity increase and the growth of the non-SW UTXO are particularly troubling for Bitcoin businesses from the perspectives of user growth and security, respectively. Conversely, a BBSI delivers an immediate and predictable capacity increase.
The voluntary nature of SW upgrades is subject to the first-mover game theory problem. With a risky upgrade that moves transaction signatures to a new witness field that is hidden to some nodes, the incentive for the rational actor is to let others take that risk first, while the rational actor sits back, waits, and watches to see if people lose funds or have problems. Moreover, the voluntary SW upgrade also suffers from the free-rider game theory problem. If others upgrade and move their data to the witness field, one can benefit even without upgrading or using SW transactions themselves. These factors further contribute to the unpredictable changes to Bitcoin’s transaction capacity and fees if SW is adopted via a soft fork.
3.4 Economic distortions and price fixing
Segregated Witness as a soft fork alters the economic incentives that regulate access to Bitcoin’s one fundamental good: block-size space. Firstly, it subsidises signature data in large/complex P2WSH transactions (i.e., at ¼ of the cost of transaction/UTXO data). However, the signatures are more expensive to validate than the UTXO, which makes this unjustifiable in terms of computational cost. The discount itself appears to have been determined arbitrarily and not for any scientific or data-backed reasoning.
Secondly, the centralized and top-down planning of one of Bitcoin’s primary economic resources, block space, further disintermediates various market forces from operating without friction. SW as a soft fork is designed to preserve the 1 MB capacity limit for on-chain transactions, which will purposely drive on-chain fees up for all users of Bitcoin. Rising transaction fees, euphemistically called a ‘fee market’, is anything but a market when one side — i.e. supply — is fixed by central economic planners (the developers) who do not pay the costs for Bitcoin’s capacity (the miners). Economic history has long taught us the results of non-market intervention in the supply of goods and services: the costs are externalised to consumers. The adoption of SW as a soft fork creates a bad precedent for further protocol changes that affirm this type of economic planning.
3.5 Soft fork risks
In this section we levy criticisms of soft forks more broadly when they affect the protocol and economic properties of Bitcoin to the extent that SW does. In this case, a soft fork reduces the security of full nodes without the consent of the node operator. The SW soft fork forces node operators either to upgrade, or to unconditionally accept the loss of security by being downgraded to a SPV node.
Non-upgraded nodes further weaken the general security of Bitcoin as a whole through the reduction of the number of fully validating nodes on the network. This is because non-upgraded nodes will only perform the initial check to see if the redeem script hash matches the pubkey script hash of the unspent output. This redeem script may contain an invalid witness program, for P2WSH transactions, that the non-upgraded node doesn’t know how to verify. This node will then blindly relay the invalid transaction across the network.
SW as a soft fork is the opposite of anti-fragile. Even if the community wants the change (i.e., an increase in transaction capacity), soft-forking to achieve it means that the miners become the key target of lobbying (and they already are). Soft forks are risky in this context because it becomes relatively easy to change things, which may be desirable if the feature is both minor and widely beneficial. However, it is bad in this case because the users of Bitcoin (i.e. everyone else but the miners) are not given the opportunity to consent or opt out, despite being affected the most by such a sweeping change. This can be likened to a popular head of state who bends the rules of jurisprudence to bypass slow legal processes to "get things done." The dangerous precedent of taking legal shortcuts is of no concern to the masses until a new, less popular leader takes hold of the reins, and by then it is too late to reverse. In contrast, activating SW via a hard fork ensures that the entire community, not just the miners, decides on changes made to the protocol. Users who unequivocally disagree with a change being made are given the clear option not to adopt the change; not so with a soft fork.
3.6 Once activated, SW cannot be undone and must remain in the Bitcoin codebase forever.
If any critical bugs resulting from SW are discovered down the road, bugs serious enough to contemplate rolling it back, then anyone will be able to spend native SW outputs, leading to a catastrophic loss of funds. ...
...
Conclusion
Segregated Witness is the most radical and irresponsible protocol upgrade Bitcoin has faced in its eight-year history. The push for the SW soft fork puts Bitcoin miners in a difficult and unfair position, to the extent that they are pressured into enforcing a complicated and contentious change to the Bitcoin protocol without community consensus or an honest discussion weighing the benefits against the costs. The scale of the code changes is far from trivial - nearly every part of the codebase is affected by SW.
While increasing the transaction capacity of Bitcoin has already been significantly delayed, SW represents an unprofessional and ineffective solution to both transaction malleability and scaling. As a soft fork, SW introduces more technical debt to the protocol and fundamentally fails to achieve its design purpose. As a hard fork, combined with real on-chain scaling, SW can effectively mitigate transaction malleability and quadratic signature hashing. Each of these issues is too important for the future of Bitcoin to gamble on SW as a soft fork and the permanent baggage that comes with it.
As much as the authors of this article desire transaction capacity increases, it is far better to work towards a clean technical solution to malleability and scaling than to further encumber the Bitcoin protocol with permanent technical debt. ...
MORE: https://medium.com/the-publius-letters/segregated-witness-a-fork-too-far-87d6e57a4179
submitted by german_bitcoiner to btc

Defining bitcoin for lawyers

Hi guys,
I am in the process of writing a few legal articles on bitcoin, and I am looking to try and define what "bitcoin" itself is. The audience of this definition is non-technical, legal academics and practitioners.
I have prepared two definitions, the first a short "what do you have when you have a bitcoin" definition, and the second a slightly more technical one. I am wondering if you guys could check whether the definitions I am proposing are correct?
[edit: for the abundance of clarity - what I am trying to define is lowercase bitcoin - i.e. the unit, not Bitcoin, the system. What is important is what I actually "own" when I have a bitcoin]
Simple definition
  1. A bitcoin is a pairing of a locked unit in a ledger (the block chain) and the credentials required to transfer it (a signature that is a function of the holder’s public and private key, and the specified unit in the ledger)
  2. Upon registration of the transfer in the block chain, the credentials used by the transferor become ineffective, and the transferee’s credentials are replaced as the controlling credentials of the unit.
Do you think the contents of the brackets is necessary to make the definition clear, without scaring non-techies?
More technical definition
  1. A bitcoin is the pairing of a locked unit in the ledger (as recorded on the block chain), and the username (the public key/wallet address) and password (the private key) that enable a user to transfer it.
  2. If the putative owner of the bitcoin, who holds the correct credentials, transfers the bitcoin to another user, the destination wallet address, the quantum of the transfer, and a signature are broadcast. This signature is a "proof of ownership" of the unit in the ledger, checked by OP_CHECKSIG (a hash-based value that is a function of the private key, the public key, and the specific unit in the public ledger).
  3. This is verified by the bitcoin algorithm.
  4. Upon successful transfer (being written into the blockchain by the miners), the signature that was used by the transferor becomes ineffective, and the transferee’s credentials become the required credentials for a future transfer of the same unit.
I am also working on turning a description of the processes of bitcoin transactions into BPMN diagrams, and would love to know more about those as well!
I'd be grateful for any comments!
submitted by oatsandsugar to Bitcoin

Qtum - Quantum Chain Design Document

Serialization: Qtum Foundation Design Document

Foreword
In this series of articles, the Qtum Foundation will make public its early design documents for the first time, hoping to help the community understand the design intent of Qtum and the implementation details of key technologies. The articles are based on the original design drafts in order to preserve the designers' original ideas. The Qtum project team will follow up with further collation and interpretation to help readers understand more technical details, so stay tuned.
The topics that may be included in this series include
* Qtum account abstraction layer AAL
* Qtum distributed autonomous protocol DGP
* Qtum wallet (qt, mobile wallet, etc.) and browser
* Add RPC call
* Mutual interest consensus mechanism MPoS
* Add opcode
* Integration of EVM and Qtum blockchain
* Qtum x86 virtual machine
* Others...
The Qtum official account will post on the above topics from time to time, retracing the history of the Qtum project and its key technologies from scratch.
Qtum original design document summary - new Qtum opcodes
As we all know, Qtum uses the same UTXO model as Bitcoin. The original UTXO scripting system is not compatible with the EVM account model, so Qtum added three opcodes - OP_CREATE, OP_CALL, and OP_SPEND - to UTXO transaction scripts in order to support conversion between the UTXO and EVM account models. The original names of these three opcodes were OP_EXEC (OP_CREATE), OP_EXEC_ASSIGN (OP_CALL) and OP_TXHASH (OP_SPEND).
The following is an excerpt of representative original documents for interested readers.
OP_CREATE (or OP_EXEC)
OP_CREATE (or OP_EXEC) is used to create a smart contract. The Qtum development team's original design files related to this opcode are excerpted below (ps: QTUM<#> or QTUMCORE<#> numbers refer to internal design documents):
QTUMCORE-3: Add EVM and OP_CREATE for contract execution
Description: After this story, the EVM should be integrated and a very basic contract should be capable of being executed. There will be a new opcode, OP_CREATE (formerly OP_EXEC), which takes 4 arguments, in push order:
1. VM version (currently 1 is EVM)
2. Gas price (not yet used, anything is valid)
3. Gas limit (not yet used, assume very high limit)
4. bytecode
For now it is OK that this script format be forced and mandatory for OP_CREATE transactions on the blockchain (ie, only "standard" allowed on the blockchain). When OP_CREATE is encountered, it should execute the EVM and persist the contract to a database (triedb).
Note: Make sure to follow policy for external code (commit vanilla unmodified code first, and then change it as needed). Make the EVM test suite functional as well (someone else can set up continuous integration changes for it though).
The above document describes the functions required by OP_CREATE and the parameters used.
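Assembled from the push order above, a standard OP_CREATE output script has roughly this shape (a sketch of the layout, not an exact serialization):

    <VM version>          (1 = EVM)
    <gas price>
    <gas limit>
    <contract bytecode>
    OP_CREATE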

OP_CALL (or OP_EXEC_ASSIGN)

OP_CALL is used for contract execution and is one of the most commonly used opcodes. There are many descriptions in the original design document.
QTUM6: Implement calling environment info in EVM for OP_EXEC_ASSIGN 
Description: Solidity expects certain information to be pushed onto the stack as part of its ABI. So, when data is sent into the contract using OP_EXEC_ASSIGN, we need to make sure to provide this data. This data includes the Solidity "function selector", as well as ensuring the opcodes CALLER and ORIGIN function properly. This looks to be fairly easy - it should just be transferring some data from the Bitcoin stack to the EVM stack and setting some fields for the origin info. However, this story should be split into multiple tasks and re-evaluated if it isn't easy. See also: https://github.com/ethereum/wiki/wiki/Ethereum-Contract-ABI
For populating the CALLER and ORIGIN values, the following should be done: OP_EXEC_ASSIGN should take 2 extra arguments, SENDER and SENDER_SIGNATURE. Sender should be a public key. Sender Signature is the signature of all the vins for the current transaction, signed of course using the SENDER value. On the EVM side, CALLER's value will be a public key hash, i.e. a hash of the SENDER public key. This public key hash should be compatible with Bitcoin's public key hash for its standard version 1 addresses. If the given SENDER_SIGNATURE does not match successfully, then the transaction should be considered invalid. If the SENDER public key is 0, then SENDER_SIGNATURE must also be 0, and the CALLER opcode etc. should just return 0.
The above document describes the OP_EXEC_ASSIGN calling environment information that needs to be implemented in the EVM.
QTUM8: Implement OP_EXEC_ASSIGN for sending money to contracts 
Description: A new opcode should be added, OP_EXEC_ASSIGN. This opcode should take these arguments in push order:
1. version number (VM version to use, currently just 1)
2. gas price (can be ignored for now)
3. gas refund script (can be ignored for now)
4. data (The data to hand to the smart contract. This will include things like the Solidity ABI Function Selector and other data that will later be available using the CALLERDATA EVM opcode)
5. smart contract address (txid + vout number)
It should return two values right now, 0 and 0. These are for spendable and out of gas, respectively. Making them spendable and dealing with out of gas will be in a future story. For this story, the EVM contract does not actually need to be executed. This opcode should only be a placeholder so that the accounting system can determine how much money a contract has control of.
The above document describes the OP_EXEC_ASSIGN implementation details.
QTUM15: Execute the relevant contract during OP_EXEC_ASSIGN 
Description: After this story is complete, when OP_EXEC_ASSIGN is reached, it should actually execute the contract whose address was given to it, passing in the relevant data from the Bitcoin script stack. Other data, such as the caller and sender, can be left for a later story. Making the CALLER, ORIGIN, etc. opcodes work properly will be fixed in a later story.
The above document describes how an OP_EXEC_ASSIGN script runs the relevant contract code.
QTUM40: Allow contracts to send money to pubkeyhash addresses
Description: We need to allow contracts to send money back to pubkeyhash addresses, so that people can withdraw their coins from contracts when allowed, etc. This should work similar to how version 0 contract sends work. Instead of using an OP_EXEC_ASSIGN vout though, we need to instead use a standard pubkeyhash script. So, upon spending to a pubkeyhash, the following transaction should be placed on the blockchain:
vin: [standard contract OP_EXEC_ASSIGN inputs] ...
vout:
OP_DUP OP_HASH160 [pubKeyHash] OP_EQUALVERIFY OP_CHECKSIG
change output - version 0 OP_EXEC_ASSIGN back to spending contract
These outputs should be directly spendable in the wallet with no changes to the wallet code itself.
The above document describes how to allow contracts to send QTUM to pubkeyhash addresses.
QTUMCORE-10: Add ability for contracts to call other deployed contracts
Description: Contracts should be capable of calling other contracts using a new opcode, OP_CALL. Arguments in push order:
1. version (32 bit integer)
2. gas price (64 bit integer)
3. gas limit (64 bit integer)
4. contract address (160 bits)
5. data (any length)
OP_CALL should always return false for now. OP_CALL only results in contract execution when used in a vout; similar to OP_CREATE, it uses the special rule to process the script during vout processing (rather than when spent, as is normal in Bitcoin). Contract execution should only be triggered when the transaction script is in this standard format and has no extra opcodes. If an OP_CALL is created that uses an invalid contract address, then no contract execution should take place, but the transaction should still be valid in the blockchain. If money was sent with OP_CALL, then that money (minus the gas fees) should result in a refund transaction sending the funds back to vin[0]'s vout script. The "sender" exposed to the EVM should be the pubkeyhash spent by vin[0]. If the vout spent by vin[0] is not a pubkeyhash, then the sender should be 0. Funds can be sent to the contract using an OP_CALL vout. These funds will be handled by the account abstraction layer in a different story, to expose this to the EVM. Multiple OP_CALLs can be used in a single transaction. However, before contract execution, the gas price and gas limit of each OP_CALL vout should be checked to ensure that the transaction provides enough transaction fees to cover the gas. Additionally, this should be verified even when the contract is not executed, such as when it is accepted in the mempool.
The above document describes how the contract calls other contracts via OP_CALL.
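Putting QTUMCORE-10's push order together, a standard OP_CALL output script has roughly this shape (a sketch of the layout, not an exact serialization):

    <version>             (32-bit integer)
    <gas price>           (64-bit integer)
    <gas limit>           (64-bit integer)
    <contract address>    (160 bits)
    <data>                (e.g. Solidity ABI function selector + arguments)
    OP_CALL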

OP_SPEND (or OP_TXHASH, OP_EXEC_SPEND)

OP_SPEND is used to spend from a contract's balance. Because a contract address is a special kind of address, its UTXOs require special handling to maintain consensus, so the original design documents have comparatively more to say about the OP_SPEND opcode.
QTUM20: Create OP_EXEC_SPEND transaction when a contract spends money 
Description: When a CALL opcode or similar is used from an EVM contract to send another contract money, this should be shown on the blockchain as a new transaction. When a money transfer is done in the contract, the miner should add a new transaction exactly after the currently-processing transaction in the block. This transaction should spend an input owned by the contract by using EXEC_SPEND in its redeemScript. For the purposes of this story, assume change is not something to be worried about, and consume as many inputs as needed. Properly picking efficient coins and sending money back to the originating contract will come in a later story. Edge cases to watch for: the transaction for sending money to the contract must come directly after the executing transaction. The outputs should use a version-0 OP_EXEC_ASSIGN vout, so that if the transaction were received out of context, it would still mean not to execute the contract.
The above document describes the timing of creating a OP_SPEND transaction.
QTUM21: Create consensus-critical change and coin-picking algorithm for OP_EXEC_SPEND transactions
Description: Building on #20, a consensus-critical algorithm must now be made that picks the most optimal outputs belonging to the contract, spends them, and also makes a change output that returns the "change" from the transaction back to the contract. All outputs in this case should use a version-0 OP_EXEC_ASSIGN, to avoid running into the limitation that prevents more than one (version 1) OP_EXEC_ASSIGN transaction from being in a single transaction. The transaction should have as many vins as needed, and exactly 2 vouts: the first vout going to the target contract, and the second vout sending change back to the source contract.
QTUM22: Disallow more than one EVM execution per transaction
Description: In order to avoid significant edge cases, for now, disallow more than one EVM execution to take place in a single transaction. This includes both deployment and fund assignment vouts. Instead, such things should be split into multiple transactions If two EVM executions are encountered, the transaction should be treated as completely invalid and not suitable for broadcast nor putting into a block
QTUM23: Add "version 0" OP_EXEC_ASSIGN, which does not execute the EVM
Description: To counteract problems from #22, we should allow OP_EXEC_ASSIGN to be used to fund a contract without the contract actually being executed. This will be used later for "change" outputs to (multiple) contracts. If the version number passed in for OP_EXEC_ASSIGN is 0, then the contract is not executed. Also, this is only valid if the data provided to OP_EXEC_ASSIGN is just a single byte "0". Multiple version-0 OP_EXEC_ASSIGN vouts should be valid in a transaction, or 1 non-version-0 OP_EXEC_ASSIGN (or an OP_EXEC deployment) and multiple version-0 OP_EXEC_ASSIGN vouts. This will be used for all money spending that is sent from one contract to another contract.
The above three documents describe the consensus-critical coin-picking and change algorithm that keeps OP_SPEND from causing consensus failures and ensures change is returned correctly, as well as the cases in which a contract should not be executed and how those are handled.
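Taken together, QTUM20/21/23 describe a contract-to-contract transfer with roughly this shape (a sketch from the documents above, not an exact serialization):

    vin:     [contract-owned outputs, redeemed via OP_EXEC_SPEND]
    vout[0]: version-0 OP_EXEC_ASSIGN to the target contract      (payment)
    vout[1]: version-0 OP_EXEC_ASSIGN back to the source contract (change)

Because both vouts are version 0, neither triggers EVM execution; they only move balances tracked by the account abstraction layer.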
QTUM34: Disallow OP_EXEC and OP_EXEC_ASSIGN from coinbase transactions
Description: Because of problems with coinbase maturity and potential side effects from ordering of gas-refund scripts, it should not be legal for coinbase outputs to be anything which results in EVM execution or directly changing EVM account balances. This includes version-0 OP_EXEC_ASSIGN outputs.
The above document stipulates that coinbase transactions should not include contract-related scripts.

Other related documents

In addition, there are some documents describing the infrastructure needed for the new opcodes.
QTUMCORE-51:Formalize the version field for OP_CREATE and OP_CALL Description:In order to sustain future extensions to the protocol, we need to set some rules for how we will later upgrade and add new VMs by changing the "version" argument to OP_CREATE and OP_CALL. We need a definitive VM version format beyond our current "just increment when doing upgrades". This would allow us to more easily plan upgrades and soft-forks. Proposed fields: 
  1. VM Format (can be increased from 0 to extend this format in the future): 2 bits
  2. Root VM - the actual VM to use, such as EVM, Lua, JVM, etc.: 6 bits
  3. VM Version - the version of the Root VM to use (for upgrading the root VM with backwards compatibility): 8 bits
  4. Flag options - for flags to the VM execution and the AAL: 16 bits
Total: 32 bits (4 bytes). Size is important since it will be in every EXEC transaction.
Flag option bits that control contract creation (only apply to OP_CREATE):
• 0 (reserved) Fixed gas schedule - if true, this contract opts out of allowing different gas schedules. Using OP_CALL with a gas schedule other than the one specified at its creation will result in an immediate exception and an out-of-gas refund condition.
• 1 (reserved) Enable contract admin interface (reserved only; this will be implemented later. It will allow contracts to control for themselves which VM versions and similar settings they allow, and to change those values over the lifecycle of the contract.)
• 2 (reserved) Disallow version 0 funding - if true, this contract is not capable of receiving money through a version 0 OP_CALL, other than as required for the account abstraction layer.
• Bits 3-15 are available for future extensions.
Flag options that control contract calls (only apply to OP_CALL): (none yet)
Flag options that control both contract calls and creation: (none yet)
These flags will be implemented in a later story.
Note that the version field now MUST be a 4-byte push. A standard EVM contract would now use the version number (in hex) "01 00 00 00".
Consensus behavior: VM Format must be 0 to be valid in a block. Root VM can be any value: 1 is EVM, 0 is no-exec; all other values result in no-exec (allowed, but with no execution, for easier soft-forks later). VM Version can be any value (soft-fork compatibility); if a version other than 0 is used (0 is the initial release version), it will execute using version 0 and ignore the value. Flag options can be any value (soft-fork compatibility); inactive flag fields are ignored.
Standard mempool behavior: VM Format must be 0. Root VM must be 0 or 1. VM Version must be 0. Flag options - all valid fields can be set; all fields that are not assigned must be set to 0.
Defaults for EVM: VM Format: 0, Root VM: 1, VM Version: 0, Flags: 0.
The above document formalizes the version information needed by OP_CREATE and OP_CALL, paving the way for Qtum's subsequent multi-VM support.
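As a concrete illustration of the layout, here is a minimal Python sketch that packs and unpacks the proposed 4-byte version field. The bit placement within the first byte (root VM in the low 6 bits, VM format in the high 2 bits) is an assumption, chosen so that the standard EVM default round-trips to the "01 00 00 00" encoding quoted above; the real consensus code may order the bits differently.

```python
import struct

# Hypothetical packing of the QTUMCORE-51 version field. Only the field
# widths (2 + 6 + 8 + 16 bits) come from the document; bit placement is assumed.

def pack_version(vm_format: int, root_vm: int, vm_version: int, flags: int) -> bytes:
    assert vm_format < 4 and root_vm < 64 and vm_version < 256 and flags < 65536
    first = (root_vm & 0x3F) | ((vm_format & 0x03) << 6)  # 6-bit root VM, 2-bit format
    return struct.pack("<BBH", first, vm_version, flags)   # 4 bytes total

def unpack_version(raw: bytes):
    first, vm_version, flags = struct.unpack("<BBH", raw)
    return first >> 6, first & 0x3F, vm_version, flags     # (format, root VM, version, flags)

# The document's standard EVM default: format 0, root VM 1 (EVM), version 0, no flags.
assert pack_version(0, 1, 0, 0).hex() == "01000000"
assert unpack_version(bytes.fromhex("01000000")) == (0, 1, 0, 0)
```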
QTUMCORE-52: Contract Admin Interface Description: (Note: this isn't a goal for mainnet, though it would be a nice feature to include.) It should be possible to manage the lifecycle of a contract internally, within the contract itself. Variables and configuration values that might need to change over the course of a contract's lifecycle:
• Allowable gas schedules
• Allowable VM versions (i.e., if a future VM version breaks this contract, don't allow it to be used, and deprecate past VM versions from being used to interact with this contract)
• Creation flags (the version flags in OP_CREATE)
All of these variables must be controllable from within the contract itself, using decentralized code. For instance, in a DAO scenario, participants might vote on them within the contract, and the contract then triggers the code that changes these parameters. In addition, a contract should be able to inspect its own settings throughout its execution as well as when it is initially created. I propose implementing this interface as a special pre-compiled contract. For a contract to interact with it, it would call it using the Solidity ABI like any other contract. Proposed ABI for the contract:
• bytes[2048] GasSchedule(int n)
• int GasScheduleCount()
• int AddGasSchedule(bytes[2048])
• bytes[32] AllowedVMVersions()
• void SetAllowedVMVersions(bytes[32])
Alternative implementation: there could be a specific Solidity function that is called to validate that the contract should allow itself to be called in a particular manner. In this way a contract is responsible for managing its own state. The basic way it would work is that when you use OP_CALL to call a contract, the two validation functions below run first (their execution counts toward gas costs); if either returns false, an out-of-gas condition is triggered immediately and execution is cancelled. Managing the ValidateVMVersion callback is slightly complicated, however, because we must decide which VM version to use to run it; a bad one could cause the validation function itself to misbehave.

```
pragma solidity 0.4.0;

contract BlockHashTest {
    function BlockHashTest() { }

    // Return true to allow the contract to run under the given gas schedule.
    function ValidateGasSchedule(bytes32 addr) public returns (bool) {
        if (addr == "123454") {
            return true;   // allow contract to run
        }
        return false;      // do not allow contract to run
    }

    // Allow the contract to run on VM versions 2-9 only. Say, for example,
    // version 1 had a vulnerability in it, and version 10 broke the contract.
    function ValidateVMVersion(byte version) public returns (bool) {
        if (version >= 2 && version < 10) {
            return true;
        }
        return false;
    }
}
```
The above document describes the contract's management interface: yes, a contract can manage its own state.
QTUMCORE-53: Add opt-out flags to contracts for version 0 sends Description: Some contracts may wish to opt out of certain features in Qtum that are not present in Ethereum, so that more Ethereum contracts can be ported to Qtum without worrying about new features of the Qtum blockchain. Two flag options should be added to the version field, which only take effect in the OP_CREATE that creates the contract:
• (1st bit) Disallow "version 0" OP_CALLs to this contract outside of the AAL (DisallowVersion0). If this is enabled, then an OP_CALL using "root VM 0" (which causes no execution) may not be sent to this contract. If money is sent to this contract using a "version 0" OP_CALL, it will result in an out-of-gas exception and the funds should be refunded. Version 0 payments made internally within the account abstraction layer are not affected by this flag. Along with these new consensus rules, there should also be some standard mempool checks:
  1. If an OP_CALL tx uses a different gas schedule than the contract creation, and the disallow-dynamic-gas flag is set, then the transaction should be rejected from the mempool as a non-standard transaction (version 0 payments should not be allowed as standard transactions in the mempool anyway).
The above document describes how contracts can opt out of certain Qtum-specific features for better EVM compatibility, making it easier to port existing smart contracts from the Ethereum ecosystem to Qtum.
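For illustration, a minimal sketch of the DisallowVersion0 check described above. The flag's bit position and all names are assumptions; only the rule (refuse version-0 sends unless they come from the account abstraction layer) is from the document.

```python
# Assumed bit position of DisallowVersion0 within the 16 flag bits.
DISALLOW_VERSION0_FLAG = 1 << 0

def version0_send_allowed(creation_flags: int, from_aal: bool) -> bool:
    # A version-0 OP_CALL moves money without executing the contract. A
    # contract created with DisallowVersion0 refuses such payments unless
    # they originate inside the account abstraction layer; otherwise the
    # call fails with an out-of-gas exception and the funds are refunded.
    if creation_flags & DISALLOW_VERSION0_FLAG and not from_aal:
        return False
    return True

assert version0_send_allowed(0, from_aal=False)
assert not version0_send_allowed(DISALLOW_VERSION0_FLAG, from_aal=False)
assert version0_send_allowed(DISALLOW_VERSION0_FLAG, from_aal=True)
```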

Summary

The original Qtum design documents presented above describe the opcodes Qtum added for contract execution. They lay the groundwork for Qtum's EVM, which runs the account model on top of the UTXO model, and make the account abstraction layer (AAL) possible. The Qtum project team will interpret further key documents in subsequent posts. If you have any questions, you can post them in the comments or contact the Qtum project team.
The Qtum official account will publish updates on these topics from time to time, retracing the history of the Qtum project and its key technologies from the beginning.
Please note that, at Patrick Dai's request, this material was translated into English and published on Reddit.
OP's Qtum Address: QMmYAMEFgvPJGwK9nrwqYw1DHhBkiuEi78
submitted by szhman to Qtum [link] [comments]

Make 0-conf double spends infeasible by fixing the r value ahead of time, thereby putting the coins up for grabs when a second transaction is signed

I'm proposing a new kind of transaction that would make double spending 0-conf transactions uneconomical, thereby reducing the risk of double spends for the recipient.
Theory
Creating an ECDSA signature involves choosing a random number k. From this number, a value r is derived that is revealed as part of the signature. Quoting from Wikipedia:
As the standard notes, it is crucial to select different k for different signatures, otherwise the equation in step 6 can be solved for d_A, the private key
This is how, a while back, coins were stolen from blockchain.info wallets: a bug in their random number generation caused the same k to be used twice, revealing the private keys.
But we can use this to our advantage to discourage double spending. What if the unspent output already determined which r value must be used to sign the transaction that spends those coins? Then signing two different transactions that both spend the same output means revealing the private key and putting the coins up for grabs by anyone (effectively the next miner to mine a block).
This does not guarantee that the recipient of a 0-conf transaction will receive their coins, but it does make double spending uneconomical, as the attacker would not get the coins back for themselves. This should reduce the risk of double spending for the recipient.
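To make the theory concrete, here is a small Python sketch showing how anyone holding two signatures that share the same k (and hence the same r) can solve for the private key. The key, nonce, and message hashes are toy values; n is secp256k1's group order, and r is stood in by k mod n since its exact value doesn't matter for the algebra.

```python
# secp256k1 group order
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def sign(z, d, k):
    r = pow(k, 1, n)  # stand-in for the x-coordinate of k*G
    s = pow(k, -1, n) * (z + r * d) % n  # ECDSA: s = k^-1 (z + r*d) mod n
    return r, s

d, k = 0xC0FFEE, 0xBADC0DE   # private key and the reused nonce
z1, z2 = 0x1111, 0x2222      # hashes of two different transactions
r, s1 = sign(z1, d, k)
_, s2 = sign(z2, d, k)

# From s_i = k^-1 (z_i + r*d):
#   k = (z1 - z2) / (s1 - s2) mod n,  then  d = (s1*k - z1) / r mod n
k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n
assert (k_rec, d_rec) == (k, d)  # anyone with both signatures recovers d
```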
Implementation
We could introduce a new operation to the Bitcoin scripting language, similar to OP_CHECKSIG, but with one more input: the r value. When a transaction is created that spends such an output, the specified r must be used in the signature or the transaction is invalid.
One problem is that r must be determined by the recipient of the output, while the output is created by the sender. I see 3 potential solutions for this:
  1. Use a P2SH address that has the r value embedded. The problem with this is that each output spent to this address would have to use the same r. So it is crucial that coins are spent to this address no more than once.
  2. When accepting coins, specify the r together with the address at which you want to receive the coins, for example by using the payment protocol, BIP 70.
  3. When planning to do a 0-conf transaction, the sender should first move their coins into a new output that fixes the value of r in advance.
To me (2) seems the most promising.
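For illustration, here is a minimal sketch of what the proposed opcode's extra check could look like: the verifier parses r out of the DER-encoded signature and fails the script unless it matches the r committed in the output. All names are hypothetical and the final ECDSA verification is stubbed out.

```python
def der_extract_r(sig: bytes) -> int:
    """Parse r from a DER ECDSA signature: 0x30 <len> 0x02 <rlen> <r> 0x02 <slen> <s>."""
    assert sig[0] == 0x30 and sig[2] == 0x02, "not a DER ECDSA signature"
    rlen = sig[3]
    return int.from_bytes(sig[4:4 + rlen], "big")

def ecdsa_verify(sig: bytes, pubkey: bytes, msg_hash: bytes) -> bool:
    return True  # stub; a real node runs full ECDSA verification here

def checksig_fixed_r(sig: bytes, pubkey: bytes, msg_hash: bytes, required_r: int) -> bool:
    # The only change from ordinary OP_CHECKSIG: the signature's r component
    # must equal the r value committed in the output being spent.
    if der_extract_r(sig) != required_r:
        return False
    return ecdsa_verify(sig, pubkey, msg_hash)

# Toy DER signature with r = 0x11 and s = 0x22.
toy_sig = bytes([0x30, 0x06, 0x02, 0x01, 0x11, 0x02, 0x01, 0x22])
assert checksig_fixed_r(toy_sig, b"", b"", 0x11)
assert not checksig_fixed_r(toy_sig, b"", b"", 0x99)
```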
Note: I'm no crypto expert so it would be great if someone could verify and confirm that this would work.
submitted by dskloet to btc [link] [comments]
