Public Key Hash - How Does Bitcoin Work?

Proof Of Work Explained

https://preview.redd.it/hl80wdx61j451.png?width=1200&format=png&auto=webp&s=c80b21c53ae45c6f7d618f097bc705a1d8aaa88f
A proof-of-work (PoW) system (or protocol, or function) is a consensus mechanism. The concept was first invented by Cynthia Dwork and Moni Naor, as presented in a 1993 journal article. In 1999, Markus Jakobsson and Ari Juels formalized the idea in a paper and coined the term "proof of work".
It was developed as a way to deter denial-of-service attacks and other service abuse (such as spam on a network). Today it is the most widely used consensus algorithm, adopted by many cryptocurrencies such as Bitcoin and Ethereum.
How does it work?
In this method, a group of users competes to find the solution to a complex mathematical puzzle. Whoever finds the solution first broadcasts the block to the network for verification. Once the other users have verified the solution, the block moves to the confirmed state.
The blockchain network consists of numerous decentralized nodes. Some of these nodes act as miners, responsible for adding new blocks to the blockchain. A miner repeatedly picks a random number (the nonce), which is combined with the data present in the block. To find a correct solution, the miner must find a nonce for which the resulting hash is valid, so that the newly generated block can be added to the main chain. The network pays a reward to the miner node that finds the solution.
The block is then passed through a hash function to produce an output that satisfies the required criteria. Once the result is found, the other nodes in the network verify and validate the outcome. Every new block holds the hash of the preceding block, which forms a chain of blocks that together store information within the network. Changing a block would require regenerating all of its successors and redoing the work they contain, which is practically impossible. This protects the blockchain from tampering.
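The nonce search can be made concrete with a short sketch. The following Python snippet is illustrative only (it ignores Bitcoin's real block-header layout and compact difficulty encoding): it simply increments a nonce until the double SHA-256 of the data falls below a target.

```python
# Minimal proof-of-work sketch: keep trying nonces until the double-SHA256
# digest of (data + nonce) is numerically below the target.
from hashlib import sha256

def mine(block_data, target):
    nonce = 0
    while True:
        digest = sha256(sha256(block_data + nonce.to_bytes(8, "little")).digest()).digest()
        if int.from_bytes(digest, "big") < target:   # "winning ticket" found
            return nonce, digest.hex()
        nonce += 1

# A deliberately easy target so the loop finishes in a fraction of a second.
nonce, digest = mine(b"prev-hash|merkle-root|timestamp", 1 << 240)
print(nonce, digest)
```

A real miner varies the header nonce (and other fields) in exactly this brute-force way; only the target, derived from the network difficulty, makes the search expensive.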
What is Hash Function?
A hash function is a function that maps data of any length to fixed-size values. The result of a hash function is known as a hash value, hash code, digest, or simply a hash.
https://preview.redd.it/011tfl8c1j451.png?width=851&format=png&auto=webp&s=ca9c2adecbc0b14129a9b2eea3c2f0fd596edd29
The hash method is quite secure: any slight change in the input results in a completely different output, causing the block to be discarded by network participants. The hash function generates a fixed-length output regardless of the length of the input. It is a one-way function, i.e. it cannot be reversed to get the original data back; one can only recompute the hash of the original data and check that it matches the output.
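A quick illustration of these properties with SHA-256 (the hash function Bitcoin applies twice to the block header): the output length is always the same, and changing a single character of the input yields a completely unrelated digest.

```python
# SHA-256 always produces a 256-bit (64 hex character) digest, and a one-character
# change in the input produces an unrelated-looking output (the avalanche effect).
from hashlib import sha256

print(sha256(b"hello world").hexdigest())  # b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9
print(sha256(b"hello worle").hexdigest())  # same length, completely different value
```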
Implementations
Nowadays, Proof of Work is used in many cryptocurrencies. It was first implemented in Bitcoin, after which it became so popular that it was adopted by several other cryptocurrencies. Bitcoin uses the Hashcash puzzle; the complexity of the puzzle depends on the total hash power of the network, and on average a block is formed approximately every 10 minutes. Litecoin, a Bitcoin-based cryptocurrency, has a similar system, and Ethereum also implemented the same protocol.
Types of PoW
Proof-of-work protocols can be categorized into two types:
· Challenge-response
This protocol creates a direct link between the requester (client) and the provider (server).
In this method, the requester must find the solution to a challenge posed by the provider, and the provider then validates the solution to authenticate the requester.
The provider chooses the challenge on the spot, so its difficulty can be adapted to its current load. If the challenge has a known solution, or is known to exist within a bounded search space, the work required on the requester's side is bounded.
https://preview.redd.it/ij967dof1j451.png?width=737&format=png&auto=webp&s=12670c2124fc27b0f988bb4a1daa66baf99b4e27
Source-wiki
· Solution–verification
These protocols do not have any such prior link between the sender and the receiver. The client imposes a problem on itself, solves it, and then sends the solution to the server, which checks both the problem choice and the outcome. Like Hashcash, these schemes are usually based on unbounded probabilistic iterative procedures.
https://preview.redd.it/gfobj9xg1j451.png?width=740&format=png&auto=webp&s=2291fd6b87e84395f8a4364267f16f577b5f1832
Source-wiki
These two methods are generally based on one of the following three techniques:
CPU-bound
This technique depends on the speed of the processor: the higher the processing power, the faster the computation.
Memory-bound
This technique relies on main-memory accesses (either latency or bandwidth) to bound the computation speed.
Network-bound
In this technique, the client must perform a few computations and wait to receive some tokens from remote servers.
List of proof-of-work functions
Here is a list of known proof-of-work functions:
o Integer square root modulo a large prime
o Weakened Fiat–Shamir signatures
o Ong–Schnorr–Shamir signature is broken by Pollard
o Partial hash inversion
o Hash sequences
o Puzzles
o Diffie–Hellman–based puzzle
o Moderate
o Mbound
o Hokkaido
o Cuckoo Cycle
o Merkle tree-based
o Guided tour puzzle protocol
A successful attack on a blockchain network requires a great deal of computational power and time. Proof of Work makes such attacks uneconomical, since the cost incurred would be greater than the potential rewards for attacking the network. Miners are also incentivized not to cheat.
It is still one of the most popular methods of reaching consensus in blockchains. It may not be the most efficient solution because of its high energy consumption, but that very expenditure is what underpins the security of the network.
Because of Proof of Work, it is practically impossible to alter any part of the blockchain, since any such change would require re-mining all subsequent blocks. It is also difficult for any single user to take control of the network's computing power, because acquiring that much hash power requires enormous amounts of energy and expensive hardware.
submitted by RumaDas to u/RumaDas [link] [comments]

Calculate txn_id from raw txn_hex

I'm trying to calculate a txn_id from raw txn_hex. The procedure works fine for legacy TXNs but gives unexpected results on SegWit TXNs. I compared this snippet of code to the txn_id produced by Electrum and the blockchain.com TXN decoder:
  1. Take in TXN in hex
  2. Convert the hex to binarray
  3. Double hash binarray
  4. Reverse the resultant digest because of endianness
  5. Display in hex.
t0 is my legacy testnet TXN and t1 is my segwit testnet TXN.
Thoughts?

UPDATE

Found the relevant source in Electrum transaction.py:1036
Basically you strip the flags and tx_witnesses listed in the wiki spec
```python

#!/usr/bin/env python3
# [repo] https://github.com/brianddk/reddit ... python/txn_hash.py
# [ref]  https://www.reddit.com/g4hvyf

from hashlib import sha256

def txid(tx):
    bin = bytes.fromhex(tx)
    txid = sha256(sha256(bin).digest()).digest()[::-1].hex()
    return txid

# Raw Legacy
t0 = ('0200000001cd3b93f5b24ae190ce5141235091cd93fbb2908e24e5b9ff6776ae'
      'c11b0e04e5000000006b4830450221009f156db3585c19fe8e294578edbf5b5e'
      '4159a7afc3a7a00ebaab080dc25ecb9702202581f8ae41d7ade2f06c9bb9869e'
      '42e9091bafe39290820438b97931dab61e140121030e669acac1f280d1ddf441'
      'cd2ba5e97417bf2689e4bbec86df4f831bf9f7ffd0fdffffff010005d9010000'
      '00001976a91485eb47fe98f349065d6f044e27a4ac541af79ee288ac00000000')

# Raw Segwit
t1 = ('0200000000010100ff121dd31ead0f06e3014d9192be8485afd6459e36b09179'
      'd8c372c1c494e20000000000fdffffff013ba3bf070000000017a914051877a0'
      'cc43165e48975c1e62bdef3b6c942a38870247304402205644234fa352d1ddbe'
      'c754c863638d2c26abb9381966358ace8ad7c52dda4250022074d8501460f4e4'
      'f5ca9788e60afafa1e1bcbf93e51529defa48317ad83e069dd012103adc58245'
      'cf28406af0ef5cc24b8afba7f1be6c72f279b642d85c48798685f86200000000')

# UPDATE Raw Segwit with flags and tx_witnesses stripped
t2 = ('02000000'
      '0100ff121dd31ead0f06e3014d9192be8485afd6459e36b09179'
      'd8c372c1c494e20000000000fdffffff013ba3bf070000000017a914051877a0'
      'cc43165e48975c1e62bdef3b6c942a3887'
      '00000000')

print(f"t0: {txid(t0)}\nt1: {txid(t1)}\nt2: {txid(t2)}")

# TXN_IDs from the above python
# t0: cb33472bcaed59c66fae30d7802b6ea2ca97dc33c6aad76ce2e553b1b4a4e017
# t1: b11fdde7e3e635c7f15863a9399cca42d46b5a42d87f4e779dfd4806af2401ce
# t2: d360581ee248be29da9636b3d2e9470d8852de1afcf3c3644770c1005d415b30

# TXN_IDs from Electrum
# t0: cb33472bcaed59c66fae30d7802b6ea2ca97dc33c6aad76ce2e553b1b4a4e017
# t1: d360581ee248be29da9636b3d2e9470d8852de1afcf3c3644770c1005d415b30
```
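For reference, the marker, flag, and witness data can also be stripped programmatically instead of by hand. The sketch below is my own illustration (not part of the original post) and assumes a well-formed transaction; it parses just enough of the serialization to drop the witness section before hashing.

```python
# Strip the segwit marker (0x00), flag (0x01) and witness data from a raw
# transaction so that double-SHA256 of the result gives the canonical txid.
from hashlib import sha256

def read_varint(data, pos):
    n = data[pos]
    if n < 0xfd:
        return n, pos + 1
    if n == 0xfd:
        return int.from_bytes(data[pos+1:pos+3], 'little'), pos + 3
    if n == 0xfe:
        return int.from_bytes(data[pos+1:pos+5], 'little'), pos + 5
    return int.from_bytes(data[pos+1:pos+9], 'little'), pos + 9

def strip_witness(tx_hex):
    raw = bytes.fromhex(tx_hex)
    if raw[4] != 0x00:                    # no marker byte: already legacy-serialized
        return tx_hex
    pos = 6                               # skip version (4), marker (1), flag (1)
    start = pos
    n_in, pos = read_varint(raw, pos)
    for _ in range(n_in):
        pos += 36                         # previous outpoint (txid + index)
        slen, pos = read_varint(raw, pos)
        pos += slen + 4                   # scriptSig + sequence
    n_out, pos = read_varint(raw, pos)
    for _ in range(n_out):
        pos += 8                          # value
        slen, pos = read_varint(raw, pos)
        pos += slen                       # scriptPubKey
    # Witness data sits between pos and the final 4-byte locktime; drop it.
    return (raw[:4] + raw[start:pos] + raw[-4:]).hex()

def txid(tx_hex):
    raw = bytes.fromhex(strip_witness(tx_hex))
    return sha256(sha256(raw).digest()).digest()[::-1].hex()
```

Running `txid(t1)` on the segwit transaction above should then agree with Electrum's `d360581e...` result without any manual stripping.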
submitted by brianddk to Bitcoin [link] [comments]

Need help with a coding solution to determine a users electric transmission rate

Hi all, I've built a web app for bitcoin miners, called the "Bitcoin Cost Estimator", that I would like to adapt for use in solar power. Unfortunately it's only half baked. It uses the NREL API to pull a user's electric generation rate based on their address, but without pulling their transmission rate it's only one half of the true data. I do understand there are intricacies involved here; not everybody pays the same rate for their electricity... utilities charge more if you use more, and it depends on time of day... I just want to get a more accurate number. It does have "estimator" in the title, but as of now it has no way of calculating the transmission charges, only generation... Can anybody help me find an API or an algorithmic method that would allow me to grab the transmission rates by address, zip code, latitude/longitude... or some other criteria?
Before someone says something like “just input your rate per KwH” let me tell you, I meet 2-3 people per day who have no clue what their true electric rate is. I sell solar power, and i look at a lot of electric bills, let me give you an example.
I'm looking at an Eversource bill right now, displaying a generation rate of 8.53 cents per kilowatt hour. This customer used 1310 kilowatt hours in a month and has an electric bill of 247.38... if you do the math, 8.53 cents * 1310 does not equal 247.38, it equals 111.74. So oftentimes I'll walk into a customer's home and they say "what are you offering for an electric rate? Because I pay 8.5 cents," and I have to explain to them that with a $247.38 bill at 1310 kilowatt hours, their electric rate is closer to 18.88 cents (247.38/1310 = 18.88). If I deduct the connection charge (23.75) we get 223.63, which would actually be a better number to do the math on, because you will always pay a connection fee regardless; even if you unplug everything and use no electricity for the month, you will always pay a service charge just to be connected to the grid. So... 223.63/1310 = 17.07 cents/KwH.
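Here is that arithmetic as a small sketch (the figures are the ones quoted from the bill above; nothing else is assumed):

```python
# Effective electric rate in cents/kWh: total bill, optionally minus the fixed
# connection charge, divided by kWh used.
def effective_rate_cents(bill_total, kwh, connection_charge=0.0):
    return (bill_total - connection_charge) / kwh * 100

print(round(effective_rate_cents(247.38, 1310), 2))          # ~18.88 cents/kWh
print(round(effective_rate_cents(247.38, 1310, 23.75), 2))   # ~17.07 cents/kWh
```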
Looking over countless bills, I can tell you that number is always somewhere between 17 and 20 cents here in my state, so that's the degree of accuracy i'm looking for... its not 100% accurate, its a spread, but it's far more accurate than doing the math at 8.5 cents. The delivery rates (at least here in CT) are also broken down by kilowatt hour. When i look at the number of people who think their electric rate is something other than what it is, it gives me pause looking at bitcoin mining calculators... How many CT residents are looking at a bitcoin mining calculator and entering 8.5 cents per KwH instead of 17-20 cents per KwH? (I mean obviously nobody is mining bitcoin in CT but still...)
when it comes to delivery the customer on this bill pays the utility company...
2.6 cents/KwH --- transmission charge
3.4 cents/KwH --- distribution charge per KwH
.13 cents/KwH --- Electric system improvement
.2 cents/KwH --- revenue adjustment mechanism
.015 cents/KwH --- CTA charge per KwH
1.08 cents/KwH --- FMCC delivery charge
1.02 cents/KwH --- combined public benefits charge
(These are the pieces missing from my code)
These combined add around 8.5 cents/KwH to his total electric rate... 8.5+8.5 = 17 cents. The connection charge is the only part of the bill not directly linked to KwH usage.
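Summing those listed per-kWh delivery components (values copied from the bill described above) shows where the extra ~8.5 cents comes from:

```python
# Per-kWh delivery components from the sample Eversource bill, in cents/kWh.
delivery = {
    "transmission": 2.6,
    "distribution": 3.4,
    "system_improvement": 0.13,
    "revenue_adjustment": 0.2,
    "cta": 0.015,
    "fmcc_delivery": 1.08,
    "public_benefits": 1.02,
}
adder = sum(delivery.values())          # ~8.45 cents/kWh of delivery charges
print(round(8.53 + adder, 2))           # ~16.98 cents/kWh generation + delivery
```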
I don't know what the structure is for other bills in other utility districts, but I assume the dataset exists, cause i'm looking at it on my bill, and I look at it on my customers bills every day.
I do have a brute force solution, which would involve looking at a utility bill for every district in the country, and hard coding all of those charges i see from sample bills into the algorithm... the obvious problem with this is that as utility companies raise their rates, I would have to stay on top of that and frequently edit the code...
My algorithm is to collect the users address, convert it to longitude/latitude for my API to digest, then make a get request to the API for the generation rate (not transmission rate) at those coordinates.
Then the bitcoin math... Not to go too in-depth, but it pulls the hashrate, calculates how many KwH needed for any given miner to hash enough to mine one bitcoin (hashes to mine 1 block divided by 12, I understand bitcoin are mined by the block, it’s an abstraction) etc etc.
The point here is to build something idiot proof... there is a large population of people who are incapable of entering their price per kilowatt hour into a calculator, because they literally do not know what their price per KwH is...they just think that they do. They think it’s half of what it is. This demographic makes up about 50-60% of people I meet who want to go solar. (This may be unbelievable to coders, but not to a solar salesman)
I know bitcoin miners are smarter than your average Joe, but people who want to save on their electric bill are generally not well versed in electricity, and I would like my algorithm to have applications beyond the crypto space.
Furthermore, there’s a million bitcoin mining calculator that ask you to put in your price per KwH... the idea here is to build something different.
Not interested in answers along the line of “can’t be done” or “why not just input your price per KwH like any other bitcoin calculator”, so if that’s all you’ve got for me, please save your keystrokes.
submitted by Baconbits1204 to webdev [link] [comments]

Jiangzhuoer: CSW's Three Extreme Claims - [BitKan 1v1] Craig Wright vs Jiangzhuoer

Digest from [BitKan 1v1] debate.
bitkan.pro aggregates all trading depth of Binance Huobi and OKEx. or Try our APP!
https://preview.redd.it/ohaz6a5lkoc31.png?width=1058&format=png&auto=webp&s=826957a79fe4fa6e66f2565cbe265cc5e7c3b772
Question 2: During the hash war in which BSV forked from BCH, why did you support BCH? What do you think of the differences between BSV and BCH?
Jiang: First of all, we have to figure out how some of the key propositions of BSV came about. CSW appears to be the leader of the BSV community, but in fact CSW is just a chess piece. For example, CSW is nominally the chief scientist of Nchain, yet he holds no shares in the series of BSV-related companies such as Nchain, Coingeek, and so on. The real boss of BSV, and the main backer behind CSW, is the casino tycoon Calvin Ayre.
Zhao Nan wrote two articles that lay out the cause and effect of Calvin Ayre's capital layout clearly:
"The capital layout of the casino tycoon Calvin Ayre" >>(Chinese)
"The ins and outs of the Calvin Ayre team" >>(Chinese)
Calvin Ayre's ultimate goal is therefore to make money on the Canadian stock market through Coingeek. Coingeek develops its own mining machines, mines itself, controls the BSV chain, and uses "CSW" as the gimmick to tell us the story of BSV.

So the fork of BSV away from BCH was one step in Calvin Ayre's overall capital layout. It did not happen because of some irreconcilable difference in development direction, but because Coingeek needed to control BCH; since it could not, it split off a chain that Coingeek could control completely. The whole thing was planned in advance: for example, bitcoinsv.org was registered on July 2, 2018 and bitcoinsv.io on August 16, long before CSW began firing shots at the ABC team.
CSW's goal was to split BSV from BCH, so he had to overstate many of his claims in order to create a split. If he had put forward reasonable claims to a rational and pragmatic BCH community, he could not have split it. He had to raise extreme claims that the BCH community could not accept, and then incite some community members with those extreme claims, much as the Nazis used extreme propaganda and incitement, in order to split off from BCH.

CSW's extreme claims, such as:
1 Giant blocks: BCH advocates scaling with larger blocks. What about CSW? He demanded jumping to oversized blocks on a very short timeline. BCH's 32MB block size is sufficient and does not exceed the network's capacity, yet CSW insisted on upgrading to 128MB immediately rather than waiting until the next year, and on upgrading to 2GB as well in 2019.
But the result? Never mind 2GB; even 100MB blocks exceed the current network's carrying capacity. After the BSV fork, blocks that were too large failed to propagate across the whole network in time, and there have been several deep rollbacks. On April 18, 2019, a 128MB block at height 578640 resulted in a rollback of 6 confirmations, making 6 confirmations unreliable.
On April 18, 2019 (Beijing time), between 21:00 and 22:00, a deep reorganization of up to six blocks occurred on the BSV chain (block heights 578640-578645).

https://preview.redd.it/7winlisnkoc31.png?width=1124&format=png&auto=webp&s=1c766e14d6360f869006b918b3e7d2a25b9b5fe4
According to BitMEX Research, the BSV chain was rolled back by two blocks that week. One of the orphaned blocks was about 62.6MB in size, and this large block may have been the cause of the rollback. In addition, BSV planned to launch an upgrade called Quasar on July 24, whose only change was to increase the default block size limit. Expanding block capacity increases the probability of block reorganization: a very large block may still be propagating while several smaller blocks overtake it in height, which leads to reorganizations or even forks.

2 Locking the protocol: A chain needs a stable protocol; drastically changing it every time certainly affects development built on top of it. If CSW had proposed a stable protocol, everyone would have agreed and he could not have split the chain. So what did he do? CSW went even more extreme: he proposed to lock the protocol entirely, even rolling back to the original version of Bitcoin, which is ridiculous.
The environment changes, and the protocol must change with it. For example, if version 0.1 of Bitcoin were perfect and the 14-day difficulty adjustment were not a defect, why doesn't BSV remove BCH's "non-original" DAA difficulty adjustment algorithm and switch back to the 14-day adjustment? Because the moment BSV removed the BCH DAA, it would be cut down by the far larger hash power.

3 Hash power decides everything: Why does CSW claim that hash power decides everything? Because his extreme claims did not dominate the community at the time, while CA's Coingeek had deployed a large number of mining machines and commanded a lot of hash power, so he advocated that hash power should decide everything. Of course, he did not know that my hash power was greater than his; I will talk about this later.
Because these claims were created for the purpose of splitting rather than arising from natural development, they contradict one another. For example, CSW says the protocol is to be locked, and also that hash power decides everything. If hash power even decided to increase the 21 million coin cap, which claim would have the final say?

Why don't I support BSV's development path? Because these extreme claims of CSW, whether giant blocks, locking the protocol, or hash power deciding everything, were all proposed purposefully in order to split; in practice they cannot be implemented, so of course I will not support extreme claims that cannot actually land.
In addition, these extreme claims will become a heavy liability for BSV's future development. BSV is supposed to develop according to them, but in practice it cannot, so the claims will have to be revised, and the community members who were incited by those very claims will certainly not accept that. How, then, can BSV keep developing?

Digest from [BitKan 1v1] debate.
bitkan.pro aggregates all trading depth of Binance Huobi and OKEx. or Try our APP!
submitted by BitKan to btc [link] [comments]

Merkle Trees and Mountain Ranges - Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments

Original link: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012715.html
Unedited text and originally written by:

Peter Todd pete at petertodd.org
Tue May 17 13:23:11 UTC 2016
# Motivation

UTXO growth is a serious concern for Bitcoin's long-term decentralization. To
run a competitive mining operation potentially the entire UTXO set must be in
RAM to achieve competitive latency; your larger, more centralized, competitors
will have the UTXO set in RAM. Mining is a zero-sum game, so the extra latency
of not doing so if they do directly impacts your profit margin. Secondly,
having possession of the UTXO set is one of the minimum requirements to run a
full node; the larger the set the harder it is to run a full node.

Currently the maximum size of the UTXO set is unbounded as there is no
consensus rule that limits growth, other than the block-size limit itself; as
of writing the UTXO set is 1.3GB in the on-disk, compressed serialization,
which expands to significantly more in memory. UTXO growth is driven by a
number of factors, including the fact that there is little incentive to merge
inputs, lost coins, dust outputs that can't be economically spent, and
non-btc-value-transfer "blockchain" use-cases such as anti-replay oracles and
timestamping.

We don't have good tools to combat UTXO growth. Segregated Witness proposes to
give witness space a 75% discount, in part to make reducing the UTXO set size
by spending txouts cheaper. While this may change wallets to more often spend
dust, it's hard to imagine an incentive sufficiently strong to discourage most,
let alone all, UTXO growing behavior.

For example, timestamping applications often create unspendable outputs due to
ease of implementation, and because doing so is an easy way to make sure that
the data required to reconstruct the timestamp proof won't get lost - all
Bitcoin full nodes are forced to keep a copy of it. Similarly anti-replay
use-cases like using the UTXO set for key rotation piggyback on the uniquely
strong security and decentralization guarantee that Bitcoin provides; it's very
difficult - perhaps impossible - to provide these applications with
alternatives that are equally secure. These non-btc-value-transfer use-cases
can often afford to pay far higher fees per UTXO created than competing
btc-value-transfer use-cases; many users could afford to spend $50 to register
a new PGP key, yet would rather not spend $50 in fees to create a standard two
output transaction. Effective techniques to resist miner censorship exist, so
without resorting to whitelists blocking non-btc-value-transfer use-cases as
"spam" is not a long-term, incentive compatible, solution.

A hard upper limit on UTXO set size could create a more level playing field in
the form of fixed minimum requirements to run a performant Bitcoin node, and
make the issue of UTXO "spam" less important. However, making any coins
unspendable, regardless of age or value, is a politically untenable economic
change.


# TXO Commitments

A merkle tree committing to the state of all transaction outputs, both spent
and unspent, we can provide a method of compactly proving the current state of
an output. This lets us "archive" less frequently accessed parts of the UTXO
set, allowing full nodes to discard the associated data, still providing a
mechanism to spend those archived outputs by proving to those nodes that the
outputs are in fact unspent.

Specifically TXO commitments proposes a Merkle Mountain Range¹ (MMR), a
type of deterministic, indexable, insertion ordered merkle tree, which allows
new items to be cheaply appended to the tree with minimal storage requirements,
just log2(n) "mountain tips". Once an output is added to the TXO MMR it is
never removed; if an output is spent its status is updated in place. Both the
state of a specific item in the MMR, as well the validity of changes to items
in the MMR, can be proven with log2(n) sized proofs consisting of a merkle path
to the tip of the tree.
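As a rough illustration of the data structure (my own sketch, not the proposal's actual code), an MMR append only ever touches the right-hand peaks, so keeping just the log2(n) mountain tips is enough to add new items:

```python
# Minimal Merkle Mountain Range append: keep only the peak (height, digest)
# pairs; two peaks of equal height always merge into a higher mountain.
from hashlib import sha256

def h(data: bytes) -> bytes:
    return sha256(data).digest()

class MMR:
    def __init__(self):
        self.peaks = []                      # [(height, digest)], left to right

    def append(self, leaf: bytes):
        self.peaks.append((0, h(leaf)))
        while len(self.peaks) >= 2 and self.peaks[-1][0] == self.peaks[-2][0]:
            height, right = self.peaks.pop()
            _, left = self.peaks.pop()
            self.peaks.append((height + 1, h(left + right)))

    def root(self) -> bytes:
        # "Bag" the peaks right-to-left into a single commitment digest.
        digest = self.peaks[-1][1]
        for _, peak in reversed(self.peaks[:-1]):
            digest = h(peak + digest)
        return digest

mmr = MMR()
for txout in [b"a", b"b", b"c", b"d", b"e"]:
    mmr.append(txout)
print(len(mmr.peaks), mmr.root().hex())      # 5 leaves -> 2 peaks (heights 2 and 0)
```

Updating a spent output in place, and proving its state, requires the pruned interior nodes along one merkle path, which is exactly what the commitment proofs described below supply.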

At an extreme, with TXO commitments we could even have no UTXO set at all,
entirely eliminating the UTXO growth problem. Transactions would simply be
accompanied by TXO commitment proofs showing that the outputs they wanted to
spend were still unspent; nodes could update the state of the TXO MMR purely
from TXO commitment proofs. However, the log2(n) bandwidth overhead per txin is
substantial, so a more realistic implementation is to have a UTXO cache for
recent transactions, with TXO commitments acting as an alternate for the (rare)
event that an old txout needs to be spent.

Proofs can be generated and added to transactions without the involvement of
the signers, even after the fact; there's no need for the proof itself to
be signed and the proof is not part of the transaction hash. Anyone with access to
TXO MMR data can (re)generate missing proofs, so minimal, if any, changes are
required to wallet software to make use of TXO commitments.


## Delayed Commitments

TXO commitments aren't a new idea - the author proposed them years ago in
response to UTXO commitments. However it's critical for small miners' orphan
rates that block validation be fast, and so far it has proven difficult to
create (U)TXO implementations with acceptable performance; updating and
recalculating cryptographicly hashed merkelized datasets is inherently more
work than not doing so. Fortunately if we maintain a UTXO set for recent
outputs, TXO commitments are only needed when spending old, archived, outputs.
We can take advantage of this by delaying the commitment, allowing it to be
calculated well in advance of it actually being used, thus changing a
latency-critical task into a much easier average throughput problem.

Concretely each block B_i commits to the TXO set state as of block B_{i-n}, in
other words what the TXO commitment would have been n blocks ago, if not for
the n block delay. Since that commitment only depends on the contents of the
blockchain up until block B_{i-n}, the contents of any block after are
irrelevant to the calculation.


## Implementation

Our proposed high-performance/low-latency delayed commitment full-node
implementation needs to store the following data:

1) UTXO set

Low-latency K:V map of txouts definitely known to be unspent. Similar to
existing UTXO implementation, but with the key difference that old,
unspent, outputs may be pruned from the UTXO set.


2) STXO set

Low-latency set of transaction outputs known to have been spent by
transactions after the most recent TXO commitment, but created prior to the
TXO commitment.


3) TXO journal

FIFO of outputs that need to be marked as spent in the TXO MMR. Appends
must be low-latency; removals can be high-latency.


4) TXO MMR list

Prunable, ordered list of TXO MMR's, mainly the highest pending commitment,
backed by a reference counted, cryptographically hashed object store
indexed by digest (similar to how git repos work). High-latency ok. We'll
cover this in more in detail later.


### Fast-Path: Verifying a Txout Spend In a Block

When a transaction output is spent by a transaction in a block we have two
cases:

1) Recently created output

Output created after the most recent TXO commitment, so it should be in the
UTXO set; the transaction spending it does not need a TXO commitment proof.
Remove the output from the UTXO set and append it to the TXO journal.

2) Archived output

Output created prior to the most recent TXO commitment, so there's no
guarantee it's in the UTXO set; transaction will have a TXO commitment
proof for the most recent TXO commitment showing that it was unspent.
Check that the output isn't already in the STXO set (double-spent), and if
not add it. Append the output and TXO commitment proof to the TXO journal.

In both cases recording an output as spent requires no more than two key:value
updates, and one journal append. The existing UTXO set requires one key:value
update per spend, so we can expect new block validation latency to be within 2x
of the status quo even in the worst case of 100% archived output spends.
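The two cases can be sketched as follows (the container types and the `verifies_unspent` check are simplified stand-ins I've assumed for illustration, not the proposal's actual interfaces):

```python
# Fast-path spend handling: recently created outputs come from the UTXO set,
# archived outputs must carry a TXO commitment proof and go through the STXO set.
def spend_output(outpoint, proof, utxo_set, stxo_set, txo_journal, commitment_root):
    if outpoint in utxo_set:
        del utxo_set[outpoint]                        # case 1: recent output
        txo_journal.append(("spend", outpoint))
        return True
    if proof is None or not proof.verifies_unspent(outpoint, commitment_root):
        return False                                  # case 2 without a valid proof: reject
    if outpoint in stxo_set:
        return False                                  # already spent since last commitment
    stxo_set.add(outpoint)                            # case 2: archived output
    txo_journal.append(("archived_spend", outpoint, proof))
    return True
```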


### Slow-Path: Calculating Pending TXO Commitments

In a low-priority background task we flush the TXO journal, recording the
outputs spent by each block in the TXO MMR, and hashing MMR data to obtain the
TXO commitment digest. Additionally this background task removes STXO's that
have been recorded in TXO commitments, and prunes TXO commitment data no longer
needed.

Throughput for the TXO commitment calculation will be worse than the existing
UTXO only scheme. This impacts bulk verification, e.g. initial block download.
That said, TXO commitments provides other possible tradeoffs that can mitigate
impact of slower validation throughput, such as skipping validation of old
history, as well as fraud proof approaches.


### TXO MMR Implementation Details

Each TXO MMR state is a modification of the previous one with most information
shared, so we can space-efficiently store a large number of TXO commitment
states, where each state is a small delta of the previous state, by sharing
unchanged data between each state; cycles are impossible in merkelized data
structures, so simple reference counting is sufficient for garbage collection.
Data no longer needed can be pruned by dropping it from the database, and
unpruned by adding it again. Since everything is committed to via cryptographic
hash, we're guaranteed that regardless of where we get the data, after
unpruning we'll have the right data.

Let's look at how the TXO MMR works in detail. Consider the following TXO MMR
with two txouts, which we'll call state #0:

      0
     / \
    a   b

If we add another entry we get state #1:

        1
       / \
      0   \
     / \   \
    a   b   c

Note how 100% of the state #0 data was reused in commitment #1. Let's
add two more entries to get state #2:

            2
           / \
          2   \
         / \   \
        /   \   \
       /     \   \
      0       2   \
     / \     / \   \
    a   b   c   d   e

This time part of state #1 wasn't reused - it wasn't a perfect binary
tree - but we've still got a lot of re-use.

Now suppose state #2 is committed into the blockchain by the most recent block.
Future transactions attempting to spend outputs created as of state #2 are
obliged to prove that they are unspent; essentially they're forced to provide
part of the state #2 MMR data. This lets us prune that data, discarding it,
leaving us with only the bare minimum data we need to append new txouts to the
TXO MMR, the tips of the perfect binary trees ("mountains") within the MMR:

            2
           / \
          2   \
               \
                \
                 \
                  \
                   \
                    e

Note that we're glossing over some nuance here about exactly what data needs to
be kept; depending on the details of the implementation the only data we need
for nodes "2" and "e" may be their hash digest.

Adding another three more txouts results in state #3:

                     3
                    / \
                   /   \
                  /     \
                 /       \
                /         \
               /           \
              /             \
             2               3
                            / \
                           /   \
                          /     \
                         3       3
                        / \     / \
                       e   f   g   h

Suppose recently created txout f is spent. We have all the data required to
update the MMR, giving us state #4. It modifies two inner nodes and one leaf
node:

                     4
                    / \
                   /   \
                  /     \
                 /       \
                /         \
               /           \
              /             \
             2               4
                            / \
                           /   \
                          /     \
                         4       3
                        / \     / \
                       e  (f)  g   h

Spending an archived txout requires the transaction to provide the merkle
path to the most recently committed TXO, in our case state #2. If txout b is
spent that means the transaction must provide the following data from state #2:

            2
           /
          2
         /
        /
       /
      0
       \
        b

We can add that data to our local knowledge of the TXO MMR, unpruning part of
it:

                     4
                    / \
                   /   \
                  /     \
                 /       \
                /         \
               /           \
              /             \
             2               4
            /               / \
           /               /   \
          /               /     \
         0               4       3
          \             / \     / \
           b           e  (f)  g   h

Remember, we haven't _modified_ state #4 yet; we just have more data about it.
When we mark txout b as spent we get state #5:

                     5
                    / \
                   /   \
                  /     \
                 /       \
                /         \
               /           \
              /             \
             5               4
            /               / \
           /               /   \
          /               /     \
         5               4       3
          \             / \     / \
          (b)          e  (f)  g   h

Secondly by now state #3 has been committed into the chain, and transactions
that want to spend txouts created as of state #3 must provide a TXO proof
consisting of state #3 data. The leaf nodes for outputs g and h, and the inner
node above them, are part of state #3, so we prune them:

                     5
                    / \
                   /   \
                  /     \
                 /       \
                /         \
               /           \
              /             \
             5               4
            /               /
           /               /
          /               /
         5               4
          \             / \
          (b)          e  (f)

Finally, lets put this all together, by spending txouts a, c, and g, and
creating three new txouts i, j, and k. State #3 was the most recently committed
state, so the transactions spending a and g are providing merkle paths up to
it. This includes part of the state #2 data:

                     3
                    / \
                   /   \
                  /     \
                 /       \
                /         \
               /           \
              /             \
             2               3
            / \               \
           /   \               \
          /     \               \
         0       2               3
        /       /               /
       a       c               g

After unpruning we have the following data for state #5:

                     5
                    / \
                   /   \
                  /     \
                 /       \
                /         \
               /           \
              /             \
             5               4
            / \             / \
           /   \           /   \
          /     \         /     \
         5       2       4       3
        / \     /       / \     /
       a  (b)  c       e  (f)  g

That's sufficient to mark the three outputs as spent and add the three new
txouts, resulting in state #6:

                           6
                          / \
                         /   \
                        /     \
                       /       \
                      /         \
                     6           \
                    / \           \
                   /   \           \
                  /     \           \
                 /       \           \
                /         \           \
               /           \           \
              /             \           \
             6               6           \
            / \             / \           \
           /   \           /   \           6
          /     \         /     \         / \
         6       6       4       6       6   \
        / \     /       / \     /       / \   \
      (a) (b) (c)      e  (f) (g)      i   j   k

Again, state #4 related data can be pruned. In addition, depending on how the
STXO set is implemented may also be able to prune data related to spent txouts
after that state, including inner nodes where all txouts under them have been
spent (more on pruning spent inner nodes later).


### Consensus and Pruning

It's important to note that pruning behavior is consensus critical: a full node
that is missing data due to pruning it too soon will fall out of consensus, and
a miner that fails to include a merkle proof that is required by the consensus
is creating an invalid block. At the same time many full nodes will have
significantly more data on hand than the bare minimum so they can help wallets
make transactions spending old coins; implementations should strongly consider
separating the data that is, and isn't, strictly required for consensus.

A reasonable approach for the low-level cryptography may be to actually treat
the two cases differently, with the TXO commitments committing to what data
does and does not need to be kept on hand by the UTXO expiration rules. On the
other hand, leaving that uncommitted allows for certain types of soft-forks
where the protocol is changed to require more data than it previously did.


### Consensus Critical Storage Overheads

Only the UTXO and STXO sets need to be kept on fast random access storage.
Since STXO set entries can only be created by spending a UTXO - and are smaller
than a UTXO entry - we can guarantee that the peak size of the UTXO and STXO
sets combined will always be less than the peak size of the UTXO set alone in
the existing UTXO-only scheme (though the combined size can be temporarily
higher than what the UTXO set size alone would be when large numbers of
archived txouts are spent).

TXO journal entries and unpruned entries in the TXO MMR have log2(n) maximum
overhead per entry: a unique merkle path to a TXO commitment (by "unique" we
mean that no other entry shares data with it). On a reasonably fast system the
TXO journal will be flushed quickly, converting it into TXO MMR data; the TXO
journal will never be more than a few blocks in size.

Transactions spending non-archived txouts are not required to provide any TXO
commitment data; we must have that data on hand in the form of one TXO MMR
entry per UTXO. Once spent however the TXO MMR leaf node associated with that
non-archived txout can be immediately pruned - it's no longer in the UTXO set
so any attempt to spend it will fail; the data is now immutable and we'll never
need it again. Inner nodes in the TXO MMR can also be pruned if all leafs under
them are fully spent; detecting this is easy if the TXO MMR is a merkle-sum tree,
with each inner node committing to the sum of the unspent txouts under it.

When an archived txout is spent the transaction is required to provide a merkle
path to the most recent TXO commitment. As shown above that path is sufficient
information to unprune the necessary nodes in the TXO MMR and apply the spend
immediately, reducing this case to the TXO journal size question (non-consensus
critical overhead is a different question, which we'll address in the next
section).

Taking all this into account the only significant storage overhead of our TXO
commitments scheme when compared to the status quo is the log2(n) merkle path
overhead; as long as less than 1/log2(n) of the UTXO set is active,
non-archived, UTXO's we've come out ahead, even in the unrealistic case where
all storage available is equally fast. In the real world that isn't yet the
case - even SSDs are significantly slower than RAM.


### Non-Consensus Critical Storage Overheads

Transactions spending archived txouts pose two challenges:

1) Obtaining up-to-date TXO commitment proofs

2) Updating those proofs as blocks are mined

The first challenge can be handled by specialized archival nodes, not unlike
how some nodes make transaction data available to wallets via bloom filters or
the Electrum protocol. There's a whole variety of options available, and the
the data can be easily sharded to scale horizontally; the data is
self-validating allowing horizontal scaling without trust.

While miners and relay nodes don't need to be concerned about the initial
commitment proof, updating that proof is another matter. If a node aggressively
prunes old versions of the TXO MMR as it calculates pending TXO commitments, it
won't have the data available to update the TXO commitment proof to be against
the next block, when that block is found; the child nodes of the TXO MMR tip
are guaranteed to have changed, yet aggressive pruning would have discarded that
data.

Relay nodes could ignore this problem if they simply accept the fact that
they'll only be able to fully relay the transaction once, when it is initially
broadcast, and won't be able to provide mempool functionality after the initial
relay. Modulo high-latency mixnets, this is probably acceptable; the author has
previously argued that relay nodes don't need a mempool² at all.

For a miner though not having the data necessary to update the proofs as blocks
are found means potentially losing out on transactions fees. So how much extra
data is necessary to make this a non-issue?

Since the TXO MMR is insertion ordered, spending a non-archived txout can only
invalidate the upper nodes of the archived txout's TXO MMR proof (if this
isn't clear, imagine a two-level scheme, with a per-block TXO MMRs, committed
by a master MMR for all blocks). The maximum number of relevant inner nodes
changed is log2(n) per block, so if there are n non-archival blocks between the
most recent TXO commitment and the pending TXO MMR tip, we have to store
log2(n)*n inner nodes - on the order of a few dozen MB even when n is a
(seemingly ridiculously high) year worth of blocks.
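A quick back-of-the-envelope check of that figure, assuming 32-byte digests per inner node (an assumption on my part; the post doesn't specify the node size):

```python
# Storage needed to keep the changed inner nodes for a year's worth of blocks.
from math import log2

blocks_per_year = 365 * 24 * 6               # ~10-minute blocks
nodes = log2(blocks_per_year) * blocks_per_year
print(round(nodes * 32 / 1e6), "MB")         # roughly 26 MB -- "a few dozen MB"
```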

Archived txout spends on the other hand can invalidate TXO MMR proofs at any
level - consider the case of two adjacent txouts being spent. To guarantee
success requires storing full proofs. However, they're limited by the blocksize
limit, and additionally are expected to be relatively uncommon. For example, if
1% of 1MB blocks was archival spends, our hypothetical year long TXO commitment
delay is only a few hundred MB of data with low-IO-performance requirements.


## Security Model

Of course, a TXO commitment delay of a year sounds ridiculous. Even the slowest
imaginable computer isn't going to need more than a few blocks of TXO
commitment delay to keep up ~100% of the time, and there's no reason why we
can't have the UTXO archive delay be significantly longer than the TXO
commitment delay.

However, as with UTXO commitments, TXO commitments raise issues with Bitcoin's
security model by allowing miners to profitably mine transactions
without bothering to validate prior history. At the extreme, if there was no
commitment delay at all then, at the cost of some extra network bandwidth,
"full" nodes could operate and even mine blocks completely statelessly by
expecting all transactions to include "proof" that their inputs are unspent; a
TXO commitment proof for a commitment you haven't verified isn't a proof that a
transaction output is unspent, it's a proof that some miners claimed the txout
was unspent.

At one extreme, we could simply implement TXO commitments in a "virtual"
fashion, without miners actually including the TXO commitment digest in their
blocks at all. Full nodes would be forced to compute the commitment from
scratch, in the same way they are forced to compute the UTXO state, or total
work. Of course a full node operator who doesn't want to verify old history can
get a copy of the TXO state from a trusted source - no different from how you
could get a copy of the UTXO set from a trusted source.

A more pragmatic approach is to accept that people will do that anyway, and
instead assume that sufficiently old blocks are valid. But how old is
"sufficiently old"? First of all, if your full node implementation comes "from
the factory" with a reasonably up-to-date minimum accepted total-work
thresholdⁱ - in other words it won't accept a chain with less than that amount
of total work - it may be reasonable to assume any Sybil attacker with
sufficient hashing power to make a forked chain meeting that threshold with,
say, six months worth of blocks has enough hashing power to threaten the main
chain as well.

That leaves public attempts to falsify TXO commitments, done out in the open by
the majority of hashing power. In this circumstance the "assumed valid"
threshold determines how long the attack would have to go on before full nodes
start accepting the invalid chain, or at least, newly installed/recently reset
full nodes. The minimum age that we can "assume valid" is tradeoff between
political/social/technical concerns; we probably want at least a few weeks to
guarantee the defenders a chance to organise themselves.

With this in mind, a longer-than-technically-necessary TXO commitment delayʲ
may help ensure that full node software actually validates some minimum number
of blocks out-of-the-box, without taking shortcuts. However this can be
achieved in a wide variety of ways, such as the author's prev-block-proof
proposal³, fraud proofs, or even a PoW with an inner loop dependent on
blockchain data. Like UTXO commitments, TXO commitments are also potentially
very useful in reducing the need for SPV wallet software to trust third parties
providing them with transaction data.

i) Checkpoints that reject any chain without a specific block are a more
common, if uglier, way of achieving this protection.

j) A good homework problem is to figure out how the TXO commitment could be
designed such that the delay could be reduced in a soft-fork.


## Further Work

While we've shown that TXO commitments certainly could be implemented without
increasing peak IO bandwidth/block validation latency significantly with the
delayed commitment approach, we're far from being certain that they should be
implemented this way (or at all).

1) Can a TXO commitment scheme be optimized sufficiently to be used directly
without a commitment delay? Obviously it'd be preferable to avoid all the above
complexity entirely.

2) Is it possible to use a metric other than age, e.g. priority? While this
complicates the pruning logic, it could use the UTXO set space more
efficiently, especially if your goal is to prioritise bitcoin value-transfer
over other uses (though if "normal" wallets nearly never need to use TXO
commitments proofs to spend outputs, the infrastructure to actually do this may
rot).

3) Should UTXO archiving be based on a fixed size UTXO set, rather than an
age/priority/etc. threshold?

4) By fixing the problem (or possibly just "fixing" the problem) are we
encouraging/legitimising blockchain use-cases other than BTC value transfer?
Should we?

5) Instead of TXO commitment proofs counting towards the blocksize limit, can
we use a different miner fairness/decentralization metric/incentive? For
instance it might be reasonable for the TXO commitment proof size to be
discounted, or ignored entirely, if a proof-of-propagation scheme (e.g.
thinblocks) is used to ensure all miners have received the proof in advance.

6) How does this interact with fraud proofs? Obviously furthering dependency on
non-cryptographically-committed STXO/UTXO databases is incompatible with the
modularized validation approach to implementing fraud proofs.


# References

1) "Merkle Mountain Ranges",
Peter Todd, OpenTimestamps, Mar 18 2013,
https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md

2) "Do we really need a mempool? (for relay nodes)",
Peter Todd, bitcoin-dev mailing list, Jul 18th 2015,
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009479.html

3) "Segregated witnesses and validationless mining",
Peter Todd, bitcoin-dev mailing list, Dec 23rd 2015,
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012103.html

--
https://petertodd.org 'peter'[:-1]@petertodd.org
submitted by Godballz to CryptoTechnology [link] [comments]

Is it possible for bitcoin (or any virtual currency) to be based on useful computationally useful computations? Sorry for the confusing wording.

From what I understand, and I admit I don't understand it very well - bitcoin is "mined" and this mining is difficult, and gets more difficult as time goes on.
I see parallels between this and projects like Folding@home and SETI@home.
The difference, of course, is that one generates bitcoins and the others generate data for well-known projects.
I am curious - is it possible (in theory) for currency to be generated in such a way that these distributed computing projects can somehow generate bitcoin (or a similar cryptocurrency), or is this infeasible?
I have a very limited understanding - so please be gentle.
I am willing to read more to broaden my understanding though.
Some links I found:
https://en.bitcoin.it/wiki/Intrinsic_worth_brainstorming
bitcoin.stackexchange.com/questions/2638/merge-bitcoin-mining-with-fold-at-home
http://bitcoin.stackexchange.com/questions/331/is-there-a-way-to-set-up-proof-of-work-systems-so-it-would-be-even-more-useful
Earlier threads on the same subreddit: http://www.reddit.com/r/Bitcoin/comments/13k54d/foldinghome_work_as_a_currency_could_work_bitcoin/
http://www.reddit.com/r/Bitcoin/comments/1fngc6/in_regards_to_foldinghome/
submitted by howbigis1gb to Bitcoin [link] [comments]

"POS stands for the future? Qtum brings deep analysis"

Every cryptocurrency adopts some kind of consensus mechanism so that the entire distributed network can stay synchronized. Bitcoin has used the Proof of Work (PoW) consensus mechanism since its birth, establishing proof of work through continuous cryptographic hash operations. Since the hashing algorithm is one-way, even a small change in the input data makes the output hash value completely different. If the calculated hash value satisfies certain conditions (referred to as the "mining difficulty"), participants in the Bitcoin network accept the proof of work. Mining difficulty is an ever-changing hash target: when the network produces blocks faster than scheduled, the difficulty is automatically increased so that the whole network keeps averaging one block every 10 minutes.
 
Definition
For those who are not very familiar with the blockchain, here are some basic definitions to help understand the post:
 
PoW and Blockchain Consensus System
Through 8 years of development of Bitcoin, the security of the PoW mechanism has been confirmed. However, PoW has the following problems:
 
  1. PoW has wasted a lot of power resources and is not friendly to the environment;
  2. PoW is only economically viable for large players with a lot of hash power (ordinary users can hardly mine a block at all);
  3. PoW lacks incentives for users to hold or use coins;
  4. PoW has a certain risk of centralization, because miners tend to join large pools, which makes large pools have a greater influence on the network;
 
Proof of Stake (hereinafter PoS) can solve many of these problems, because it gives any user with tokens in their wallet the opportunity to mine (and, of course, to earn the mining reward). PoS was originally proposed by Sunny King in Peercoin and was later refined and adopted in a variety of cryptocurrencies. Among them are Pavel Vasin's PoS 2.0, Larry Ren's PoS Velocity, and the recent CASPER proposed by Vlad Zamfir, as well as various other relatively unknown projects.
 
The consensus mechanism adopted by Qtum is based on PoS3.0. PoS3.0 is an upgraded version of PoS2.0, also proposed and implemented by Pavel Vasin. This article will focus on this version of the PoS implementation. Qtum made some changes based on PoS3.0, but the core consensus mechanism is basically the same.
 
For general community members and even some developers, PoS is not particularly easy to understand, because there are currently few documents detailing how network security can be ensured in a system that uses only token ownership to reach consensus. This article will elaborate on how the PoS blockchain is generated, verified, and secured in PoS3.0. The article involves some technical knowledge, but I will try to describe it using the basic definitions provided here. The reader should at least have a basic idea of a UTXO-based blockchain.
 
Before introducing PoS, let me briefly describe PoW's working mechanism, which will help in understanding PoS. The PoW mining process can be represented by the following pseudocode:
while(blockhash > difficulty) {
    block.nonce = block.nonce + 1
    blockhash = sha256(sha256(block))
}
 
As explained earlier, the hash operation takes arbitrary-length data as input and, after a series of operations, produces a fixed-length digest as output; knowing only the digest, it is impossible to recover the corresponding input data. The whole process is a lot like a lottery: you create a "ticket" by hashing the data and compare it with the target range to see whether you "win". If you don't win, you create a new "ticket" by slightly changing some of the data; in Bitcoin, the random nonce is the field used to adjust the input. Once a qualifying hash is found, the block is legitimate and can be broadcast to the network. When other miners receive the new block and it passes verification, they add it to their chain and continue building on top of it.
 
PoS protocol structure and rules
 
Now let us introduce PoS. PoS has the following goals:
  1. Blocks cannot be faked;
  2. Large holders do not receive disproportionately large rewards;
  3. Having strong computing power does not help create blocks;
  4. No single member or small group of members of the network can control the entire blockchain;
The basic concept of PoS is very similar to PoW: it, too, works like a lottery. The difference is that in PoS you cannot get new "lottery tickets" just by fine-tuning the input data. PoW uses the block hash as the ticket, while PoS introduces the concept of a "kernel hash".
The kernel hash takes as input several pieces of data in the current block that cannot be freely modified. Because miners cannot find a simple way to vary the kernel hash, they cannot obtain a legal new block by grinding through large numbers of candidate hashes.
 
In order to achieve this goal, PoS added many additional consensus rules.
First, unlike PoW, a PoS coinbase transaction (that is, the first transaction in the block) has zero output. At the same time, in order to reward the staker, a staking transaction is introduced as the second transaction of the block. The staking transaction has the following features:
  1. It has at least one legal vin
  2. The first vout must be an empty script
  3. The second vout must not be empty
 
In addition, staking transactions must also obey the following rules :
  1. The second vout must be either a pubkey script (note: not pubkeyhash) or an unspendable OP_RETURN script used to store data on the chain;
  2. The timestamp in the transaction must be consistent with the block timestamp;
  3. The total output value of the staking transaction must be less than or equal to the sum of all input values, PoS block awards, and transaction fees (ie output <= (input + block_reward + tx_fees));
  4. The output referenced by the first vin must have at least 500 confirmations (that is, the coins being staked must be at least 500 blocks old);
  5. Although the staking transaction can have multiple input vins, only the first vin is used for the consensus mechanism;
 
These rules make it easy to identify the staking transaction, thus ensuring that it can provide enough information to verify the block. It should be noted here that the first vout is not the only way to identify the staking transaction, but since the PoS3.0 designer Sunny King started using this method, and proved its reliability in long-term practice, so we have also adopted this method to identify staking transactions.
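As a rough sketch, the structural identification rules above could be checked like this (the `tx`/`vout` field names are assumptions made for illustration, not Qtum's actual classes):

```python
# Structural check for a staking transaction: >=1 vin, empty first vout,
# non-empty second vout, and a timestamp equal to the block's timestamp.
def looks_like_staking_tx(tx, block_timestamp):
    if len(tx.vin) < 1 or len(tx.vout) < 2:
        return False
    if tx.vout[0].script_pubkey != b"":
        return False
    if tx.vout[1].script_pubkey == b"":
        return False
    return tx.timestamp == block_timestamp
```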
 
Now that we know the definition of the staking transaction and we understand the rules that it must follow, let's introduce the rules of the PoS block :
 
The most important of these rules for PoS is the "kernel hash". The role of the kernel hash is similar to that of the block hash in PoW: if the hash value matches the condition, the block is considered valid. However, the kernel hash cannot be obtained by directly modifying parts of the current block. Next, I will first introduce the structure and operating mechanism of the kernel hash, and then explain the purpose of this design and the unforeseen consequences that changing it would bring.
 
Kernel Hash in PoS
The kernel hash consists of the following data, in order, as input:
 
The "stake modifier" of a block refers to the hash value of the following data:
There are only two ways to change the current kernel hash (for mining): either change the "prevout" or change the current block time.
 
In general, a wallet will contain multiple UTXOs. The balance of the wallet is simply the sum of all available UTXOs in it. This also applies to PoS wallets, and is even more important there because any output may be used for staking. One of these outputs will become the prevout in the staking transaction, which is used to generate a valid block.
 
In addition, there is one more important change in the PoS block mining process compared to PoW: the difficulty of mining is inversely proportional to the value of the coins staked (rather than the number of UTXOs). For example, a wallet staking 2 coins faces only half the difficulty of a wallet staking 1 coin. If it were not designed this way, users would be encouraged to split their balance into many tiny UTXOs, which would bloat the block size and could cause security problems.
 
The calculation of the kernel hash can be expressed in pseudo-code as:
while(true) {
    foreach(utxo in wallet) {
        blockTime = currentTime - currentTime % 16
        posDifficulty = difficulty * utxo.value
        hash = hash(previousStakeModifier << utxo.time << utxo.hash << utxo.n << blockTime)
        if(hash < posDifficulty) {
            done
        }
    }
    wait 16s -- wait 16 seconds, until the block time can be changed
}
 
Through the above process, we find a UTXO that can be used to generate a staking transaction. This staking transaction has one vin: the UTXO we found. It also has at least two vouts: the first is empty, which marks this as a PoS block, and the second is either a pay-to-pubkey script or an OP_RETURN output containing only a public key. The former serves a pure payment role, while the data-carrying form can support additional uses (such as delegating to an independent block-signing machine) without breaking the original UTXO model.
 
Finally, all transactions in the mempool are added to the block. What we need to do next is generate the signature. This signature must use the public key corresponding to the second vout of the staking transaction; the data actually signed is the block hash. After signing, we can broadcast this block to the network. Other nodes in the network will verify the block. If the block is valid, a node will accept it, connect it to its own blockchain, and broadcast the new block to the other nodes it connects to.
 
Through the above steps, we obtain a complete and secure PoS3.0 blockchain. PoS3.0 is considered the consensus mechanism most resistant to malicious attacks in a fully decentralized consensus system. Why this conclusion? To understand it, we need to look at the history of PoS development.
 
The development of PoS
PoS has a long history. Here is a brief description:
 
PoS1.0 — Applied in Peercoin, it depended heavily on coin age (i.e., how long a UTXO has remained unspent): the higher the coin age, the lower the mining difficulty. This had the side effect that users would only open their wallet after long intervals (for example, one month or longer), by which point their UTXOs had accumulated a large coin age and could find new blocks almost instantly. This makes double-spend attacks much easier. Peercoin itself is not affected by this, because it uses a mixed PoW and PoS mechanism, and PoW reduces this negative effect.
 
PoS2.0 — Coin age was removed from the consensus mechanism and a different stake modifier was used than in PoS1.0. The changes are too numerous to list, but they were essentially all about how to remove coin age and achieve a secure consensus mechanism without using the PoW/PoS hybrid mode.
 
PoS3.0 — PoS3.0 can be considered an upgraded version of PoS2.0. In PoS2.0, the stake modifier also contained the block time of the previous block; this was removed in 3.0, mainly to prevent the so-called "short-range" attack, in which it is possible to mine an alternative chain by iterating through previous block times. PoS2.0 used block time and transaction time to determine the age of a UTXO, which is slightly different from the earlier coin age: it indicates the minimum number of confirmations a UTXO needs before it can be used for staking. The UTXO age in PoS3.0 becomes simpler: it is determined by block height (depth). This avoids relying on less accurate timestamps in the blockchain and effectively immunizes against the "timewarp" attack. PoS3.0 also adds OP_RETURN support for staking transactions, so a vout can contain only the public key, not necessarily the full pay-to-pubkey script.
 
Original:https://mp.weixin.qq.com/s/BRPuRn7iOoqeWbMiqXI11g
submitted by thisthingismud to Qtum [link] [comments]

The missing explanation of Proof of Stake Version 3 - Article by earlz.net

The missing explanation of Proof of Stake Version 3

In every cryptocurrency there must be some consensus mechanism which keeps the entire distributed network in sync. When Bitcoin first came out, it introduced the Proof of Work (PoW) system. PoW is done by cryptographically hashing a piece of data (the block header) over and over. Because of how one-way hashing works, one tiny change in the data can cause an extremely different hash to come out of it. Participants in the network determine if the PoW is valid by judging if the final hash meets a certain condition, called difficulty. The difficulty is an ever-changing "target" which the hash must meet or exceed. Whenever the network is creating more blocks than scheduled, this target is changed automatically by the network so that the target becomes more and more difficult to meet, and thus requires more and more computing power to find a hash that matches the target within the target time of 10 minutes.

Definitions

Some basic definitions might be unfamiliar to people who have not worked with the blockchain code; these are:

Proof of Work and Blockchain Consensus Systems

Proof of Work is a proven consensus mechanism that has made Bitcoin secure and trustworthy for 8 years now. However, it is not without its fair share of problems. PoW's major drawbacks are:
  1. PoW wastes a lot of electricity, harming the environment.
  2. PoW benefits greatly from economies of scale, so it tends to benefit big players the most, rather than small participants in the network.
  3. PoW provides no incentive to use or keep the tokens.
  4. PoW has some centralization risks, because it tends to encourage miners to participate in the biggest mining pool (a group of miners who share the block reward), thus the biggest mining pool operator holds a lot of control over the network.
Proof of Stake was invented to solve many of these problems by allowing participants to create and mine new blocks (and thus also get a block reward), simply by holding onto coins in their wallet and allowing their wallet to do automatic "staking". Proof Of Stake was originally invented by Sunny King and implemented in Peercoin. It has since been improved and adapted by many other people. This includes "Proof of Stake Version 2" by Pavel Vasin, "Proof of Stake Velocity" by Larry Ren, and most recently CASPER by Vlad Zamfir, as well as countless other experiments and lesser known projects.
For Qtum we have decided to build upon "Proof of Stake Version 3", an improvement over version 2 that was also made by Pavel Vasin and implemented in the Blackcoin project. This version of PoS as implemented in Blackcoin is what we will be describing here. Some minor details of it have been modified in Qtum, but the core consensus model is identical.
For many community members and developers alike, proof of stake is a difficult topic, because there has been very little written on how it manages to keep the network safe using only proof of ownership of tokens on the network. This blog post will go into fine detail about Proof of Stake Version 3 and how its blocks are created, validated, and ultimately how a pure Proof of Stake blockchain is possible to secure. This will assume some technical knowledge, but I will try to explain things so that most of the knowledge can be gathered from context. You should at least be familiar with the concept of the UTXO-based blockchain.
Before we talk about PoS, it helps to understand how the much simpler PoW consensus mechanism works. Its mining process can be described in just a few lines of pseudo-code:
while(blockhash > difficulty) {
    block.nonce = block.nonce + 1
    blockhash = sha256(sha256(block))
}
A hash is a cryptographic algorithm which takes an arbitrary amount of input data, does a lot of processing of it, and outputs a fixed-size "digest" of that data. It is impossible to figure out the input data with just the digest. So, PoW tends to function like a lottery, where you find out if you won by creating the hash and checking it against the target, and you create another ticket by changing some piece of data in the block. In Bitcoin's case, the nonce is used for this, as well as some other fields (usually called "extraNonce"). Once a blockhash is found which is less than the difficulty target, the block is valid, and can be broadcast to the rest of the distributed network. Miners will then see it and start building the next block on top of this block.
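To make the lottery analogy concrete, here is a minimal, runnable Python sketch of that loop. The block header bytes and the very easy target are illustrative stand-ins, not Bitcoin's real header serialization or difficulty encoding:

import hashlib

def dsha256(data: bytes) -> bytes:
    # Bitcoin hashes the block header with SHA256 twice
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix: bytes, target: int) -> int:
    # Try successive nonces until the double-SHA256 of (prefix + nonce),
    # read as an integer, falls below the target.
    nonce = 0
    while True:
        digest = dsha256(header_prefix + nonce.to_bytes(8, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

target = 1 << 240                      # deliberately easy target so the demo finishes quickly
print("found nonce:", mine(b"example-block-header", target))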

Proof of Stake's Protocol Structures and Rules

Now enter Proof of Stake. We have these goals for PoS:
  1. Impossible to counterfeit a block
  2. Big players do not get disproportionately bigger rewards
  3. More computing power is not useful for creating blocks
  4. No one member of the network can control the entire blockchain
The core concept of PoS is very similar to PoW, a lottery. However, the big difference is that it is not possible to "get more tickets" to the lottery by simply changing some data in the block. Instead of the "block hash" being the lottery ticket to judge against a target, PoS invents the notion of a "kernel hash".
The kernel hash is composed of several pieces of data that are not readily modifiable in the current block. And so, because the miners do not have an easy way to modify the kernel hash, they can not simply iterate through a large amount of hashes like in PoW.
Proof of Stake blocks add many additional consensus rules in order to realize its goals. First, unlike in PoW, the coinbase transaction (the first transaction in the block) must be empty and reward 0 tokens. Instead, to reward stakers, there is a special "stake transaction" which must be the 2nd transaction in the block. A stake transaction is defined as any transaction that:
  1. Has at least 1 valid vin
  2. Its first vout must be an empty script
  3. Its second vout must not be empty
Furthermore, staking transactions must abide by these rules to be valid in a block:
  1. The second vout must be either a pubkey (not pubkeyhash!) script, or an OP_RETURN script that is unspendable (data-only) but stores data for a public key
  2. The timestamp in the transaction must be equal to the block timestamp
  3. The total output value of a stake transaction must be less than or equal to the total inputs plus the PoS block reward plus the block's total transaction fees. output <= (input + block_reward + tx_fees)
  4. The first spent vin's output must be confirmed by at least 500 blocks (in other words, the coins being spent must be at least 500 blocks old)
  5. Though more vins can be used and spent in a staking transaction, the first vin is the only one used for consensus parameters.
These rules ensure that the stake transaction is easy to identify, and ensure that it gives enough info to the blockchain to validate the block. The empty vout method is not the only way staking transactions could have been identified, but this was the original design from Sunny King and has worked well enough.
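As a rough illustration of how a node might check the structural rules above, here is a short Python sketch. The Tx/TxIn/TxOut classes are hypothetical, simplified stand-ins for the real data structures, and the script-type and timestamp checks (rules 1 and 2) are omitted:

from dataclasses import dataclass, field
from typing import List

@dataclass
class TxOut:
    value: int                      # amount in base units
    script: bytes                   # scriptPubKey (empty bytes = empty script)

@dataclass
class TxIn:
    prevout_confirmations: int      # depth of the output being spent

@dataclass
class Tx:
    vin: List[TxIn] = field(default_factory=list)
    vout: List[TxOut] = field(default_factory=list)

def looks_like_stake_tx(tx: Tx) -> bool:
    # At least 1 vin, an empty first vout, and a non-empty second vout
    return (len(tx.vin) >= 1 and len(tx.vout) >= 2
            and tx.vout[0].script == b"" and tx.vout[1].script != b"")

def stake_tx_valid(tx: Tx, input_value: int, block_reward: int, tx_fees: int) -> bool:
    if not looks_like_stake_tx(tx):
        return False
    # Rule 3: output <= input + block_reward + tx_fees
    if sum(o.value for o in tx.vout) > input_value + block_reward + tx_fees:
        return False
    # Rule 4: the first spent output must have at least 500 confirmations
    return tx.vin[0].prevout_confirmations >= 500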
Now that we understand what a staking transaction is, and what rules they must abide by, the next piece is to cover the rules for PoS blocks:
There are a lot of details here that we'll cover in a bit. The most important part that really makes PoS effective lies in the "kernel hash". The kernel hash is used similarly to PoW (if the hash meets the difficulty, then the block is valid). However, the kernel hash is not directly modifiable in the context of the current block. We will first cover exactly what goes into these structures and mechanisms, and later explain why this design is exactly this way, and what unexpected consequences can come from minor changes to it.

The Proof of Stake Kernel Hash

The kernel hash specifically consists of the following exact pieces of data (in order): the previous block's stake modifier, the staked UTXO's transaction timestamp, the staked UTXO's transaction hash and output number, and the current block time (the same fields fed to the hash in the pseudo-code below).
The stake modifier of a block is a hash of exactly:
The only way to change the current kernel hash (in order to mine a block), is thus to either change your "prevout", or to change the current block time.
A single wallet typically contains many UTXOs. The balance of the wallet is basically the total amount of all the UTXOs that can be spent by the wallet. This is of course the same in a PoS wallet. This is important though, because any output can be used for staking. One of these outputs is what can become the "prevout" in a staking transaction to form a valid PoS block.
Finally, there is one more aspect that is changed in the mining process of a PoS block. The difficulty is weighted against the number of coins in the staking transaction. The PoS difficulty ends up being twice as easy to achieve when staking 2 coins, compared to staking just 1 coin. If this were not the case, then it would encourage creating many tiny UTXOs for staking, which would bloat the size of the blockchain and ultimately cause the entire network to require more resources to maintain, as well as potentially compromise the blockchain's overall security.
So, if we were to show some pseudo-code for finding a valid kernel hash now, it would look like:
while(true){
    foreach(utxo in wallet){
        blockTime = currentTime - currentTime % 16
        posDifficulty = difficulty * utxo.value
        hash = hash(previousStakeModifier << utxo.time << utxo.hash << utxo.n << blockTime)
        if(hash < posDifficulty){
            done
        }
    }
    wait 16s -- wait 16 seconds, until the block time can be changed
}
This code isn't as easy to understand as our PoW example, so I'll attempt to explain it in plain English:
Do the following over and over, indefinitely:
  1. Calculate the blockTime to be the current time minus itself modulo 16 (modulo is like dividing by 16, but taking the remainder instead of the quotient).
  2. Cycle through each UTXO in the wallet.
  3. For each UTXO, calculate the posDifficulty as the network difficulty multiplied by the number of coins held by the UTXO.
  4. For each UTXO, calculate a SHA256 hash using the previous block's stake modifier, some data from the UTXO, and the blockTime. Compare this hash to the posDifficulty. If the hash is less than the posDifficulty, then the kernel hash is valid and you can create a new block.
  5. After going through all UTXOs, if no hash produced is less than the posDifficulty, then wait 16 seconds and do it all over again.
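For readers who prefer runnable code, here is a hedged Python version of the same loop. It uses a plain SHA256 from hashlib and a dictionary-shaped UTXO as stand-ins; the exact serialization and stake-modifier handling of the real Blackcoin/Qtum code are not reproduced here:

import hashlib, time

def kernel_hash(stake_modifier: bytes, utxo_time: int, utxo_hash: bytes,
                utxo_n: int, block_time: int) -> int:
    # Concatenate the same fields as the pseudo-code and hash them once with SHA256.
    data = (stake_modifier + utxo_time.to_bytes(4, "little") + utxo_hash
            + utxo_n.to_bytes(4, "little") + block_time.to_bytes(4, "little"))
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def try_stake(wallet_utxos, stake_modifier: bytes, difficulty: int):
    # One pass over the wallet for the current 16-second time slot.
    block_time = int(time.time())
    block_time -= block_time % 16                   # mask block time to 16-second granularity
    for utxo in wallet_utxos:                       # each utxo: {"value", "time", "hash", "n"}
        pos_target = difficulty * utxo["value"]     # larger stake means an easier target
        h = kernel_hash(stake_modifier, utxo["time"], utxo["hash"], utxo["n"], block_time)
        if h < pos_target:
            return utxo, block_time                 # this UTXO can stake a block right now
    return None                                     # nothing found; wait 16 s and try again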
Now that we have found a valid kernel hash using one of the UTXOs we can spend, we can create a staking transaction. This staking transaction will have 1 vin, which spends the UTXO we found that has a valid kernel hash. It will have (at least) 2 vouts. The first vout will be empty, identifying to the blockchain that it is a staking transaction. The second vout will either contain an OP_RETURN data transaction that contains a single public key, or it will contain a pay-to-pubkey script. The latter is usually used for simplicity, but using a data transaction for this allows for some advanced use cases (such as a separate block signing machine) without needlessly cluttering the UTXO set.
Finally, any transactions from the mempool are added to the block. The only thing left to do now is to create a signature, proving that we have approved the otherwise valid PoS block. The signature must use the public key that is encoded (either as a pay-to-pubkey script, or as a data OP_RETURN script) in the second vout of the staking transaction. The actual data signed is the block hash. After the signature is applied, the block can be broadcast to the network. Nodes in the network will then validate the block, and if a node finds it valid and there is no better blockchain, it will accept it into its own blockchain and broadcast the block to all the nodes it has a connection to.
Now we have a fully functional and secure PoSv3 blockchain. PoSv3 is what we determined to be most resistant to attack while maintaining a pure decentralized consensus system (i.e., without master nodes or curators). To understand why we arrived at this conclusion, however, we must understand its history.

PoSv3's History

Proof of Stake has a fairly long history. I won't cover every detail, but cover broadly what was changed between each version to arrive at PoSv3 for historical purposes:
PoSv1 - This version is implemented in Peercoin. It relied heavily on the notion of "coin age", or how long a UTXO has not been spent on the blockchain. Its implementation would basically make it so that the higher the coin age, the more the difficulty is reduced. This had the bad side-effect, however, of encouraging people to only open their wallet every month or longer for staking. Assuming the coins were all relatively old, they would almost instantaneously produce new staking blocks. This, however, makes double-spend attacks extremely easy to execute. Peercoin itself is not affected by this because it is a hybrid PoW and PoS blockchain, so the PoW blocks mitigated this effect.
PoSv2 - This version removes coin age completely from consensus, as well as using a completely different stake modifier mechanism from v1. The changes are too numerous to list here. All of this was done to remove coin age from consensus and make it a safe consensus mechanism without requiring a PoW/PoS hybrid blockchain to mitigate various attacks.
PoSv3 - PoSv3 is really more of an incremental improvement over PoSv2. In PoSv2 the stake modifier also included the previous block time. This was removed to prevent a "short-range" attack where it was possible to iteratively mine an alternative blockchain by iterating through previous block times. PoSv2 used block and transaction times to determine the age of a UTXO; this is not the same as coin age, but rather is the "minimum confirmations required" before a UTXO can be used for staking. This was changed to a much simpler mechanism where the age of a UTXO is determined by its depth in the blockchain. This thus doesn't incentivize inaccurate timestamps to be used on the blockchain, and is also more immune to "timewarp" attacks. PoSv3 also added support for OP_RETURN coinstake transactions, which allows a vout to contain the public key for signing the block without requiring a full pay-to-pubkey script.

References:

  1. https://peercoin.net/assets/papepeercoin-paper.pdf
  2. https://blackcoin.co/blackcoin-pos-protocol-v2-whitepaper.pdf
  3. https://www.reddcoin.com/papers/PoSV.pdf
  4. https://blog.ethereum.org/2015/08/01/introducing-casper-friendly-ghost/
  5. https://github.com/JohnDolittle/blackcoin-old/blob/mastesrc/kernel.h#L11
  6. https://github.com/JohnDolittle/blackcoin-old/blob/mastesrc/main.cpp#L2032
  7. https://github.com/JohnDolittle/blackcoin-old/blob/mastesrc/main.h#L279
  8. http://earlz.net/view/2017/07/27/1820/what-is-a-utxo-and-how-does-it
  9. https://en.bitcoin.it/wiki/Script#Obsolete_pay-to-pubkey_transaction
  10. https://en.bitcoin.it/wiki/Script#Standard_Transaction_to_Bitcoin_address_.28pay-to-pubkey-hash.29
  11. https://en.bitcoin.it/wiki/Script#Provably_Unspendable.2FPrunable_Outputs
Article by earlz.net
http://earlz.net/view/2017/07/27/1904/the-missing-explanation-of-proof-of-stake-version
submitted by B3TeC to Moin [link] [comments]

The Strange Birth & History of Monero, Part IV: Monero "as it is now"

You can read here part III.
You can read this whole story translated into Spanish here
This is part IV, the last but not least.
Monero - A secure, private, untraceable cryptocurrency
https://bitcointalk.org/index.php?topic=583449.0
Notable comments in this thread:
-201: “I would like to offer 1000 MRO to the first person who creates a pool”
(https://bitcointalk.org/index.php?topic=583449.msg6422665#msg6422665)
[tacotime offers bounty to potential pool developer. Bytecoin devs haven’t released any code for pools, and the only existent pool, minergate (in the future related to BCN interests) was closed source]
-256: “Adam back seems to like CryptoNote the better than Zerocash https://twitter.com/adam3us/status/453493394472697856”
(https://bitcointalk.org/index.php?topic=583449.msg6440769#msg6440769)
-264: “update on pools: The NOMP guy (zone117x) is looking to fork his open source software and get a pool going, so one should hopefully be up soon.”
(https://bitcointalk.org/index.php?topic=583449.msg6441302#msg6441302)
-273: “Update on GUI: othe from VertCoin has notified me that he is working on it.”
(https://bitcointalk.org/index.php?topic=583449.msg6442606#msg6442606)
-356: “Everyone wanting a pool, please help raise a bounty with me here:
https://bitcointalk.org/index.php?topic=589533.0
And for the GUI:
https://bitcointalk.org/index.php?topic=589561.0”
(https://bitcointalk.org/index.php?topic=583449.msg6461533#msg6461533)
[5439 MRO + 0.685 BTC + 5728555.555 BCN raised for the pool, and 1652 XMR + 121345.46695471 BCN for the GUI wallet. Though this wallet was "rejected" as the official GUI because the wallet still had to be polished before building a GUI]
-437: “Yes, most Windows users should see a higher hashrate with the new build. You can thank NoodleDoodle. ”
(https://bitcointalk.org/index.php?topic=583449.msg6481202#msg6481202)
-446: “Even faster Windows binaries have just been uploaded. Install for more hash power! Once again, it was NoodleDoodle.”
(https://bitcointalk.org/index.php?topic=583449.msg6483680#msg6483680)
-448: “that almost doubled my hashrate again! GREAT STUFF !!!”
(https://bitcointalk.org/index.php?topic=583449.msg6484109#msg6484109)
-461: “Noodle only started optimization today so there may be gains for your CPU in the future.”
(https://bitcointalk.org/index.php?topic=583449.msg6485247#msg6485247)
[First day of miner optimization by NoodleDoodle, it is only May 1st]
-706: “The unstoppable NoodleDoodle has optimized the Windows build again. Hashrate should more than double. Windows is now faster than Linux. :O”
(https://bitcointalk.org/index.php?topic=583449.msg6549444#msg6549444)
-753: “i here tft is no longer part of the project. so is he forking or relaunching bytecoin under new name and new parameters (merged mining with flatter emission curve.) also. what is the end consensus for the emission curve for monero. will it be adjusted."
(https://bitcointalk.org/index.php?topic=583449.msg6561345#msg6561345)
[May 5th, 2014: TFT launches FANTOMCOIN, a clone coin whose "only" feature was merged mining]
-761: (https://bitcointalk.org/index.php?topic=583449.msg6561941#msg6561941) [May, 5th 2014 – eizh on emission curve and tail emission]
-791: “As promised, I did Russian translation of main topic.”
(https://bitcointalk.org/index.php?topic=583449.msg6565521#msg6565521)
[one among dozens of decentralized and "altruistic" collaborators helping Monero with minor tasks]
-827: image
(https://bitcointalk.org/index.php?topic=583449.msg6571652#msg6571652)
-853: (https://bitcointalk.org/index.php?topic=583449.msg6575033#msg6575033)
[some are not happy that NoodleDoodle had only released the built binaries, but not the source code]
-950: (https://bitcointalk.org/index.php?topic=583449.msg6593768#msg6593768)
[Rias, an account suspected to be related to the Bytecoin scam, dares to tag Monero as “instamine”]
-957: “It's rather bizarre that you're calling this an "instamine" scam when you're so fervently supporting BCN, which was mined 80% before entering the clearnet. Difficulty adjustments are per block, so there is no possibility of an instamine unless you don't publish your blockchain (emission is regular at the preset interval, and scales adequately with the network hash rate). What you're accusing monero of is exactly what ByteCoin did.”
https://bitcointalk.org/index.php?topic=583449.msg6594025#msg6594025
[Discussion with rias drags on for SEVERAL posts]
-1016: “There is no "dev team". There is a community of people working on various aspects of the coin.
I've been keeping the repo up to date. NoodleDoodle likes to optimise his miner. TFT started the fork and also assists when things break. othe's been working on a GUI. zone117x has been working on a pool.
It's a decentralized effort to maintain the fork, not a strawman team of leet hackers who dwell in the underbellies of the internet and conspire for instamines.”
(https://bitcointalk.org/index.php?topic=583449.msg6596828#msg6596828)
-1023: “Like I stated in IRC, I am not part of the "dev team", I never was. Just so happens I took a look at the code and changed some extremely easy to spot "errors". I then decided to release the binary because I thought MRO would benefit from it. I made this decision individually and nobody else should be culpable”
(https://bitcointalk.org/index.php?topic=583449.msg6597057#msg6597057)
[Noodledoodle gets rid of the instaminer accusations]
-1029: “I decided to relaunch Monero so it will suit all your wishes that you had: flatter emission curve, open source optimized miner for everybody from the start, no MM with BCN/BMR and the name. New Monero will be ready tomorrow”
(https://bitcointalk.org/index.php?topic=583449.msg6597252#msg6597252)
[there are always people trying to capitalize on mistakes.]
-1030: "Pull request has been submitted and merged to update miner speed
It appears from the simplicity of the fix that there may have been deliberate crippling of the hashing algorithm from introduction with ByteCoin."
https://bitcointalk.org/index.php?topic=583449.msg6597460#msg6597460
[tacotime "officially" raises suspicions of a possibly deliberately crippled miner]
-1053: "I don't mind the 'relaunch' or the merge-mining fork or any other new coin at all. It's inevitable that the CryptoNote progresses like scrypt into a giant mess of coins. It's not undesirable or 'wrong'. Clones fighting out among themselves is actually beneficial for Monero. Although one of them is clearly unserious and trolling by choosing the same name.
Anyway, this sudden solidarity with BCN or TFT sure is strange when none of these accounts were around for the discussions that took place 3 weeks ago. Such vested interests with no prior indications. Hmm...? "
https://bitcointalk.org/index.php?topic=583449.msg6599013#msg6599013
[eizh points out the apparently organized FUD campaign]
-1061: "There was no takeover. The original developer (who himself did a fork of bytecoin and around a dozen lines of code changes) was non-responsive and had disappeared. The original name had been cybersquatted all over the place (since the original developer did not even register any domain name much less create a web site), making it impossible to even create a suitably named web site. A bunch of us who didn't want to see the coin die who represented a huge share of the hash power and ownership of the coin decided to adopt it. We reached out to the original developer to participate in this community effort and he still didn't respond over 24 hours, so we decided to act to save the coin from neglect and actively work toward building the coin."
(https://bitcointalk.org/index.php?topic=583449.msg6599798#msg6599798)
[smooth defends legitimacy of current “dev team” and decisions taken]
-1074: “Zerocash will be announced soon (May 18 in Oakland? but open source may not be ready then?).
Here is a synopsis of the tradeoffs compared to CyptoNote: […]"
(https://bitcointalk.org/index.php?topic=583449.msg6602891#msg6602891)
[comparison between Zerocash and CryptoNote]
-1083: "Altcoin history shows that except in the case of premine (Tenebrix), the first implementation stays the largest by a wide margin. We're repeating that here by outpacing Bytecoin (thanks to its 80% mine prior to surfacing). No other CN coin has anywhere near the hashrate or trading volume. Go check diff in Fantom for example or the lack of activity in BCN trading.
The only CN coin out there doing something valuable is HoneyPenny, and they're open source too. If HP develops something useful, MRO can incorporate it as well. Open source gives confidence. No need for any further edge."
(https://bitcointalk.org/index.php?topic=583449.msg6603452#msg6603452)
[eizh reminds everyone the “first mover” advantage is a real advantage]
-1132: "I decided to tidy up bitmonero GitHub rep tonight, so now there is all valuable things from latest BCN commits & Win32. Faster hash from quazarcoin is also there. So BMR rep is the freshest one.
I'm working on another good feature now, so stay tuned."
(https://bitcointalk.org/index.php?topic=583449.msg6619738#msg6619738)
[first TFT appearance in weeks; he somehow pretends to still be the "lead dev"]
-1139: "This is not the github or website used by Monero. This github is outdated even with these updates. Only trust binaries from the first post."
(https://bitcointalk.org/index.php?topic=583449.msg6619971#msg6619971)
[eizh tries to clarify to the community, after TFT's interference, which downloads are the official ones]
-1140: “The faster hash is from NoodleDoodle and is already submitted to the moner-project github (https://github.com/monero-project/bitmonero) and included in the binaries here.
[trying to bring TFT back on board] It would be all easier if you just work together with the other guys, whats the problem? Come to irc and talk like everyone else?
[on future monero exchangers] I got confirmation from one."
(https://bitcointalk.org/index.php?topic=583449.msg6619997#msg6619997)
[8th May 2014: othe announces that NoodleDoodle's optimized miner is now open source, asks TFT to collaborate, and reports that an exchange is coming]
-1146: "I'll be impressed if they [BCN/TFT shills] manage to come up with an account registered before January, but then again they could buy those.”
(https://bitcointalk.org/index.php?topic=583449.msg6620257#msg6620257)
[smooth]
-1150: “Ring signatures mean that when you sign a transaction to spend an output (coins), no one looking at the block chain can tell whether you signed it or one of the other outputs you choose to mix in with yours. With a mixing factor of 5 or 10 after several transactions there are millions of possible coins all mixed together. You get "anonymity" and mixing without having to use a third party mixer.”
(https://bitcointalk.org/index.php?topic=583449.msg6620433#msg6620433)
[smooth answering to “what are ring signatures” in layman terms]
-1170: "Someone (C++ skilled) did private optimized miner a few days ago, he got 74H/s for i5 haswell. He pointed that mining code was very un-optimized and he did essential improvements for yourself. So, high H/S is possible yet. Can the dev's core review code for that?"
(https://bitcointalk.org/index.php?topic=583449.msg6623136#msg6623136)
[forums are talking about an individual or group of individuals with optimized miners - may 9th 2014]
-1230: "Good progress on the pool reported by NOMP dev zone117x. Stay tuned, everyone.
And remember to email your favorite exchanges about adding MRO."
(https://bitcointalk.org/index.php?topic=583449.msg6640190#msg6640190)
-1258: "This is actually as confusing to us as you. At one point, thankful_for_today said he was okay with name change: https://bitcointalk.org/index.php?topic=563821.msg6368600#msg6368600
Then he disappeared for more than a week after the merge mining vote failed.”
(https://bitcointalk.org/index.php?topic=583449.msg6645981#msg6645981)
[eizh on the TFT-issue]
-1358: “Jadehorse: registered on 2014-03-06 and two pages of one line posts:
https://bitcointalk.org/index.php?action=profile;u=263597
https://bitcointalk.org/index.php?action=profile;u=263597;sa=showPosts
Trustnobody: registered on 2014-03-06 and two pages of one line posts:
https://bitcointalk.org/index.php?action=profile;u=264292
https://bitcointalk.org/index.php?action=profile;u=264292;sa=showPosts
You guys should really just stop trying. It is quite transparent what you are doing. Or if you want to do it, do it somewhere else. Everyone else: ignore them please."
(https://bitcointalk.org/index.php?topic=583449.msg6666844#msg6666844)
[FUD campaign still ongoing, smooth battles it]
-1387: "The world’s first exchange for Monero just opened! cryptonote.exchange.to"
(https://bitcointalk.org/index.php?topic=583449.msg6675902#msg6675902)
[David Latapie announces an important milestone: an exchange is here]
-1467: "image"
(https://bitcointalk.org/index.php?topic=583449.msg6686125#msg6686125)
[it is weird, but tft appears again, apparently as if he were in a parallel reality]
-1495: “http://monero.cc/blog/monero-price-0-002-passed/”
(https://bitcointalk.org/index.php?topic=583449.msg6691706#msg6691706)
["trading" milestone reached: monero surpassed the 0.002 BTC price for the first time]
-1513: "There is one and only one coin, formerly called Bitmonero, now called Monero. There was a community vote in favor (despite likely ballot stuffing against). All of the major stakeholders at the time agreed with the rename, including TFT.
The code base is still called bitmonero. There is no reason to rename it, though we certainly could have if we really wanted to.
TFT said he he is sentimental about the Bitmonero name, which I can understand, so I don't think there is any malice or harm in him continuing to use it. He just posted the nice hash rate chart on here using the old name. Obviously he understands that they are one and the same coin."
(https://bitcointalk.org/index.php?topic=583449.msg6693615#msg6693615)
[Smooth clears up, once again, the relationship between TFT and BMR. Every time TFT appears he seems to generate confusion among newbies]
-1543: "Pool software is in testing now. You can follow the progress on the pool bounty thread (see original post on this thread for link)."
(https://bitcointalk.org/index.php?topic=583449.msg6698097#msg6698097)
-1545: "[on the tail emission debate] I've been trying to raise awareness of this issue. The typical response seems to be, "when Bitcoin addresses the problem, so will we." To me this means it will never be addressed. The obvious solution is to perpetually increase the money supply, always rewarding miners with new coins.
Tacotime mentioned a hard fork proposal to never let the block reward drop below 1 coin:
Code: if (blockReward < 1){ blockReward = 1; }
I assume this is merely delaying the problem, however. I proposed a fixed annual debasement (say 2%) with a tx fee cap of like 0.001% of the current block reward (or whatever sounds reasonable). That way we still get the spam protection without worrying about fee escalation down the road."
(https://bitcointalk.org/index.php?topic=583449.msg6698879#msg6698879)
[Johnny Mnemonic wants to debate tail emission. Debate is moved to the “Monero Economy” thread]
-1603: “My GOD,the wallet is very very wierd and too complicated to operate, Why dont release a wallet-qt as Bitcoin?”
(https://bitcointalk.org/index.php?topic=583449.msg6707857#msg6707857)
[Newbies have a hard time with monero]
-1605: "because this coin is not a bitcoin clone and so there isnt a wallet-qt to just copy and release. There is a bounty for a GUI wallet and there is already an experimental windows wallet..."
(https://bitcointalk.org/index.php?topic=583449.msg6708250#msg6708250)
-1611: "I like this about Monero, but it seems it was written by cryptographers, not programmers. The damned thing doesn't even compile on Arch, and there are several bugs, like command history not working on Linux. The crypto ideas are top-notch, but the implementation is not."
(https://bitcointalk.org/index.php?topic=583449.msg6709002#msg6709002)
[Wolf0, a miner developer, little by little joining the community]
-1888: "http://198.199.79.100 (aka moneropool.org) successfully submitted a block. Miners will be paid for their work once payments start working.
P.S. This is actually our second block today. The first was orphaned. :/"
(https://bitcointalk.org/index.php?topic=583449.msg6753836#msg6753836)
[May 16th: first pool block]
-1927: "Botnets aren't problem now. The main problem is a private hi-performance miner"
(https://bitcointalk.org/index.php?topic=583449.msg6759622#msg6759622)
-1927: "Evidence?"
(https://bitcointalk.org/index.php?topic=583449.msg6759661#msg6759661)
[smooth about the private optimized miner]
-1937: “[reference needed: smooth battling the weak evidence of optimized miner] Yes, I remember that. Some person on the Internet saying that some other unnamed person said he did something hardly constitutes evidence.
I'm not even doubting that optimized asm code could make a big difference. Just not sure how to know whether this is real or not. Rumors and FUD are rampant, so it is just hard to tell."
(https://bitcointalk.org/index.php?topic=583449.msg6760040#msg6760040)
[smooth does not take the "proof" seriously]
-1949: "image
One i5 and One e5 connected to local pool:
image"
(https://bitcointalk.org/index.php?topic=583449.msg6760624#msg6760624)
[proof of optimized miner]
-1953: "lazybear are you interested in a bounty to release the source code (maybe cleaned up a bit?) your optimized miner? If not, I'll probably play around with the code myself tomorrow and see if I can come up with something, or maybe Noodle Doodle will take an interest."
(https://bitcointalk.org/index.php?topic=583449.msg6760699#msg6760699)
[smooth tries to bring lazybear and his optimized miner on board]
-1957: "smooth, NoodleDoodle just said on IRC his latest optimizations are 4x faster on Windows. Untested on Linux so far but he'll push the source to the git repo soon. We'll be at 1 million network hashrate pretty soon."
(https://bitcointalk.org/index.php?topic=583449.msg6760814#msg6760814)
[eizh makes public that NoodleDoodle also has more miner optimizations ready]
-1985: “Someone (not me) created a Monero block explorer and announced it yesterday in a separate thread:
https://bitcointalk.org/index.php?topic=611561.0”
(https://bitcointalk.org/index.php?topic=583449.msg6766206#msg6766206)
[May 16th, 2014: a functional block explorer]
-2018: “Noodle is doing some final tests on Windows and will begin testing on Linux. He expects hashrate should increase across all architectures. I can confirm a 5x increase on an i7 quad-core + Windows 7 64-bit.
Please be patient. These are actual changes to the program, not just a switch that gets flicked on. It needs testing.”
(https://bitcointalk.org/index.php?topic=583449.msg6770093#msg6770093)
[eizh has more info on last miner optimization]
-2023: “Monero marketcap is around $300,000 as of now”
(https://bitcointalk.org/index.php?topic=583449.msg6770365#msg6770365)
-2059: I was skeptical of this conspiracy theory at first but after thinking about the numbers and looking back at the code again, I'm starting to believe it.
These are not deep optimizations, just cleaning up the code to work as intended.
At 100 H/s, with 500k iterations, 70 cycles per L3 memory access, we're now at 3.5 GHz which is reasonably close. So the algorithm is finally memory-bound, as it was originally intended to be. But as delivered by the bytecode developers not even close.
I know this is going to sound like tooting our own horn but this is another example of the kind of dirty tricks you can expect from the 80% premine crowd and the good work being done in the name of the community by the Monero developers.
Assuming they had the reasonable, and not deoptimized, implementation of the algorithm as designed all along (which is likely), the alleged "two year history" of bytecoin was mined on 4-8 PCs. It's really one of the shadiest and sleaziest premines scams yet, though this shouldn't be surprising because in every type of scam, the scams always get sneakier and more deceptive over time (the simple ones no longer work)."
(https://bitcointalk.org/index.php?topic=583449.msg6773168#msg6773168)
[smooth blows the lid off: if the miner was so de-optimized, then BCN adoption was even lower than initially thought]
-2123: (https://bitcointalk.org/index.php?topic=583449.msg6781481#msg6781481)
[fluffypony first public post in Monero threads]
-2131: "moneropool.org is up to 2KHs, (average of 26Hs per user). But that's still only 0.3% of the reported network rate of 575Khs.
So either a large botnet is mining, or someone's sitting quietly on a much more efficient miner and raking in MRO."
(https://bitcointalk.org/index.php?topic=583449.msg6782192#msg6782192)
[with pools running, users start to notice that "avg" users account for a very small % of the network hashrate: either botnets or a super-optimized miner is mining monero]
-2137: “I figure its either:
(https://bitcointalk.org/index.php?topic=583449.msg6782852#msg6782852)
-2192: “New source (0.8.8.1) is up with optimizations in the hashing. Hashrate should go up ~4x or so, but may have CPU architecture dependence. Windows binaries are up as well for both 64-bit and 32-bit."
(https://bitcointalk.org/index.php?topic=583449.msg6788812#msg6788812)
[eizh makes official announce of last miner optimization, it is may 17th]
-2219: (https://bitcointalk.org/index.php?topic=583449.msg6792038#msg6792038)
[wolf0 is part of the monero community for a while, discussing several topics as botnet mining and miner optimizations. Now spots security flaws in the just launched pools]
-2301: "5x optimized miner released, network hashrate decreases by 10% Make your own conclusions. :|"
(https://bitcointalk.org/index.php?topic=583449.msg6806946#msg6806946)
-2323: "Monero is on Poloniex https://poloniex.com/exchange/btc_mro"
(https://bitcointalk.org/index.php?topic=583449.msg6808548#msg6808548)
-2747: "Monero is holding a $500 logo contest on 99designs.com now: https://99designs.com/logo-design/contests/monero-mro-cryptocurrency-logo-design-contest-382486"
(https://bitcointalk.org/index.php?topic=583449.msg6829109#msg6829109)
-2756: “So... ALL Pools have 50KH/s COMBINED.
Yet, network hash is 20x more. Am i the only one who thinks that some people are insta mining with prepared faster miners?”
(https://bitcointalk.org/index.php?topic=583449.msg6829977#msg6829977)
-2757: “Pools aren't stable yet. They are more inefficient than solo mining at the moment. They were just released. 10x optimizations have already been released since launch, I doubt there is much more optimization left.”
(https://bitcointalk.org/index.php?topic=583449.msg6830012#msg6830012)
-2765: “Penalty for too large block size is disastrous in the long run.
Once MRO value increases a lot, block penalties will become more critical of an issue. Pools will fix this issue by placing a limit on number and size of transactions. Transaction fees will go up, because the pools will naturally accept the most profitable transactions. It will become very expensive to send with more than 0 mixin. Anonymity benefits of ring signatures are lost, and the currency becomes unusable for normal transactions.”
(https://bitcointalk.org/index.php?topic=583449.msg6830475#msg6830475)
-2773: "The CryptoNote developers didn't want blocks getting very large without genuine need for it because it permits a malicious attack. So miners out of self-interest would deliberately restrict the size, forcing the network to operate at the edge of the penalty-free size limit but not exceed it. The maximum block size is a moving average so over time it would grow to accommodate organic volume increase and the issue goes away. This system is most broken when volume suddenly spikes."
(https://bitcointalk.org/index.php?topic=583449.msg6830710#msg6830710)
-3035: "We've contributed a massive amount to the infrastructure of the coin so far, enough to get recognition from cryptonote, including optimizing their hashing algorithm by an order of magnitude, creating open source pool software, and pushing several commits correcting issues with the coin that eventually were merged into the ByteCoin master. We also assisted some exchange operators in helping to support the coin.
To say that has no value is a bit silly... We've been working alongside the ByteCoin devs to improve both coins substantially."
(https://bitcointalk.org/index.php?topic=583449.msg6845545#msg6845545)
[tacotime defends the Monero team and community against accusations of just ripping off others' hard work and "stealing" their project]
-3044: "image"
(https://bitcointalk.org/index.php?topic=583449.msg6845986#msg6845986)
[Monero added to coinmarketcap may 21st 2014]
-3059: "You have no idea how influential you have been to the success of this coin. You are a great ambassador for MRO and one of the reasons why I chose to mine MRO during the early days (and I still do, but alas no soup for about 5 days now)."
(https://bitcointalk.org/index.php?topic=583449.msg6846509#msg6846509)
[random user thanks smooth for his constant presence and collaboration. It is not all FUD ;)]
-3068: "You are a little too caught up in the mindset of altcoin marketing wars about "unique features" and "the team" behind the latest pump and dump scam.
In fact this coin is really little more than BCN without the premine. "The team" is anyone who contributes code, which includes anyone contributing code to the BCN repository, because that will get merged as well (and vice-versa).
Focus on the technology (by all accounts amazing) and the fact that it was launched in a clean way without 80% of the total world supply of the coin getting hidden away "somewhere." That is the unique proposition here. There also happens to be a very good team behind the coin, but anyone trying too hard to market on the basis of some "special" features, team, or developer is selling you something. Hold on to your wallet."
(https://bitcointalk.org/index.php?topic=583449.msg6846638#msg6846638)
[An answer to those trolls saying Monero has no innovation/unique feature]
-3070: "Personally I found it refreshing that Monero took off WITHOUT a logo or a gui wallet, it means the team wasn't hyping a slick marketing package and is concentrating on the coin/note itself."
(https://bitcointalk.org/index.php?topic=583449.msg6846676#msg6846676)
-3119: “image
[included for the lulz]
-3101: "[…]The main developers are tacotime, smooth, NoodleDoodle. Some needs are being contracted out, including zone117x, LucasJones, and archit for the pool, another person for a Qt GUI, and another person independently looking at the code for bugs."
(https://bitcointalk.org/index.php?topic=583449.msg6848006#msg6848006)
[the initial "core team" so far, eizh post]
-3123: (https://bitcointalk.org/index.php?topic=583449.msg6850085#msg6850085)
[fluffy steps in with an interesting, dense post. Don't skip it, it is worth reading]
-3127: (https://bitcointalk.org/index.php?topic=583449.msg6850526#msg6850526)
[fluffy again, also worth reading, so follow the link, don't be lazy]
-3194: "Hi guys - thanks to lots of hard work we have added AES-NI support to the slow_hash function. If you're using an AES-NI processor you should see a speed-up of about 30%.”
(https://bitcointalk.org/index.php?topic=583449.msg6857197#msg6857197)
[fluffypony is now pretty active in the xmr topic and announces a new optimization to the crippled miner]
-3202: "Whether using pools or not, this coin has a lot of orphaned blocks. When the original fork was done, several of us advised against 60 second blocks, but the warnings were not heeded.
I'm hopeful we can eventually make a change to more sane 2- or 2.5-minute blocks which should drastically reduce orphans, but that will require a hard fork, so not that easy."
(https://bitcointalk.org/index.php?topic=583449.msg6857796#msg6857796)
[smooth takes the opportunity to reiterate the need for a bigger block target]
-3227: “Okay, optimized miner seems to be working: https://bitcointalk.org/index.php?topic=619373”
[wolf0 makes public his open source optimized miner]
-3235: "Smooth, I agree block time needs to go back to 2 minutes or higher. I think this and other changes discussed (https://bitcointalk.org/index.php?topic=597878.msg6701490#msg6701490) should be rolled into a single hard fork and bundled with a beautiful GUI wallet and mining tools."
(https://bitcointalk.org/index.php?topic=583449.msg6861193#msg6861193)
[tail emission, block target and block size are discussed in the next few messages among smooth, johnny and others. If you want to know further about their opinions/reasonings go and read it]
-3268: (https://bitcointalk.org/index.php?topic=583449.msg6862693#msg6862693)
[fluffy dares another user to bet 5 btc that in one year monero will be over dash in market cap. A bet that he would have lost as you can see here https://coinmarketcap.com/historical/20150524/ even excluding the 2M “instamined” coins]
-3283: "Most of the previous "CPU only" coins are really scams and the developers already have GPU miner or know how to write one. There are a very few exceptions, almost certainly including this one.
I don't expect a really dominant GPU miner any time soon, maybe ever. GPUs are just computers though, so it is certainly possible to mine this on a GPU, and there probably will be a some GPU miner, but won't be so much faster as to put small scale CPU miners out of business (probably -- absent some unknown algorithmic flaw).
Everyone focuses on botnets because it has been so long since regular users were able to effectively mine a coin (due to every coin rapidly going high end GPU and ASIC) that the idea that "users" could vastly outnumber "miners" (botnet or otherwise) isn't even on the radar.
The vision here is a wallet that asks you when you want to install: "Do you want to devote some of you CPU power to help secure the network. You will be eligible to receive free coins as a reward (recommended) [check box]." Get millions of users doing that and it will drive down the value of mining to where neither botnets nor professional/industrial miners will bother, and Satoshi's original vision of a true p2p currency will be realized.
That's what cryptonote wants to accomplish with this whole "egalitarian mining" concept. Whether it succeeds I don't know but we should give it a chance. Those cryptonote guys seem pretty smart. They've probably thought this through better than any of us have."
(https://bitcointalk.org/index.php?topic=583449.msg6863720#msg6863720)
[smooth's vision of a true p2p currency]
-3318: "I have a screen shot that was PMed to me by someone who paid a lot of money for a lot of servers to mine this coin. He won't be outed by me ever but he does in fact exist. Truth."
(https://bitcointalk.org/index.php?topic=583449.msg6865061#msg6865061)
[smooth somehow implies it is not botnets but an individual or a group of them renting huge cloud instances]
-3442: "I'm happy to report we've successfully cracked Darkcoin's network with our new quantum computers that just arrived from BFL, a mere two weeks after we ordered them."
[fluffy-troll]
-3481: “Their slogan is, "Orphaned Blocks, Bloated Blockchain, that's how we do""
(https://bitcointalk.org/index.php?topic=583449.msg6878244#msg6878244)
[Major FUD troll in the topic. One of the hardest I’ve ever seen]
-3571: "Tacotime wanted the thread name and OP to use the word privacy instead of anonymity, but I made the change for marketing reasons. Other coins do use the word anonymous improperly, so we too have to play the marketing game. Most users will not bother looking at details to see which actually has more privacy; they'll assume anonymity > privacy. In a world with finite population, there's no such thing as anonymity. You're always "1 of N" possible participants.
Zero knowledge gives N -> everyone using the currency, ring signatures give N -> your choice, and CoinJoin gives N -> people who happen to be spending around the same amount of money as you at around the same time. This is actually the critical weakness of CoinJoin: the anonymity set is small and it's fairly susceptible to blockchain analysis. Its main advantage is that you can stick to Bitcoin without hard forking.
Another calculated marketing decision: I made most of the OP about ring signatures. In reality, stealth addressing (i.e. one-time public keys) already provides you with 90% of the privacy you need. Ring signatures are more of a trump card that cannot be broken. But Bitcoin already has manual stealth addressing so the distinguishing technological factor in CryptoNote is the use of ring signatures.
This is why I think having a coin based on CoinJoin is silly: Bitcoin already has some privacy if you care enough. A separate currency needs to go way beyond mediocre privacy improvements and provide true indistinguishably. This is true thanks to ring signatures: you can never break the 1/N probability of guessing correctly. There's no additional circumstantial evidence like with CoinJoin (save for IP addresses, but that's a problem independent of cryptocurrencies)."
(https://bitcointalk.org/index.php?topic=583449.msg6883525#msg6883525)
[Anonymity discussions, specially comparing Monero with Darkcoin and its coinjoin-based solution, keep going on]
-3593: "Transaction fees should be a fixed percentage of the block reward, or at the very least not be controllable by the payer. If payers can optionally pay more then it opens the door for miner discrimination and tx fee bidding wars."
(https://bitcointalk.org/index.php?topic=583449.msg6886770#msg6886770)
[Johnny Mnemonic is a firm defender of fixed fees and tail emission: he sees the "fee market" as a big danger to the usability of cryptocurrencies]
-3986: (https://bitcointalk.org/index.php?topic=583449.msg6930412#msg6930412)
[partnership with i2p]
-4373: “Way, way faster version of cpuminer: https://bitcointalk.org/index.php?topic=619373”
(https://bitcointalk.org/index.php?topic=583449.msg6993812#msg6993812)
[the super-optimized miner is finally leaked to the public. The hashrate is now 100 times higher than it originally was with the crippled miner. The next edge for "cloud farmers" is GPU mining]
-4877: “1. We have a logo! If you use Monero in any of your projects, you can grab a branding pack here. You can also see it in all its glory right here:
logo […] 4. In order to maintain ISO 4217 compliance, we are changing our ticker symbol from MRO to XMR effective immediately."
(https://bitcointalk.org/index.php?topic=583449.msg7098497#msg7098497)
[Jun 2nd 2014]
-5079: “First GPU miner: https://bitcointalk.org/index.php?topic=638915.0”
(https://bitcointalk.org/index.php?topic=583449.msg7130160#msg7130160)
[4th June: Claymore has developed the first CryptoNight open source and publicly available GPU miner]
-5454: "New update to my miner - up to 25% hash increase. Comment and tell me how much of an increase you got from it: https://bitcointalk.org/index.php?topic=632724"
(https://bitcointalk.org/index.php?topic=583449.msg7198061#msg7198061)
[miner optimization is an endless task]
-5464: "I have posted a proposal for fixed subsidy:
https://bitcointalk.org/index.php?topic=597878.msg7202538#msg7202538"
(https://bitcointalk.org/index.php?topic=583449.msg7202776#msg7202776)
[Nice charts and discussion proposed by tacotime, worth reading it]
-5658: "- New seed nodes added. - Electrum-style deterministic wallets have been added to help in the recovery of your wallet should you ever need to. It is enabled by default."
(https://bitcointalk.org/index.php?topic=583449.msg7234475#msg7234475)
[Now you can recover your wallet with a 24 word seed]
-5726: (https://bitcointalk.org/index.php?topic=583449.msg7240623#msg7240623)
[Bitcoin Pizza in monero version: a 2500 XMR picture sale (today worth ~$20k)]
-6905: (https://bitcointalk.org/index.php?topic=583449.msg7386715#msg7386715)
[Monero missives: CryptoNote peer review starts (whitepaper reviewed)]
-7328: (https://bitcointalk.org/index.php?topic=583449.msg7438333#msg7438333)
[android monero widget built]
This is a dense digest of the first several thousand messages on the definitive Monero thread.
A lot of things happened in these stressful days and most are recorded here. It can be summarized as follows:
  • 28th April: Othe and zone117x assume the GUI wallet and CN pools tasks.
  • 30th April: First NoodleDoodle's miner optimization.
  • 11th May: First Monero exchanger
  • 13th May: Open source pool code is ready.
  • 16th May: First pool mined block.
  • 19th May: Monero in poloniex
  • 20th May: Monero +1100 bitcoin 24h trading volume in Poloniex.
  • 21st May: New official miner optimization x4 speed (accumulated optimization x12-x16). Open source wolf0's CPU miner released.
  • 25th May: partnership with i2p
  • 28th May: The legendary super-optimized miner is leaked. Currently running at 90x the original speed. The "cloud farmers'" edge in CPU mining is over.
  • 2nd June: Monero at last has a logo. Ticker symbol changes to the definitive XMR (former MRO)
  • 4th June: Claymore's open source GPU miner.
  • 10th June: Monero's "10,000 bitcoin pizza" (2500 XMR painting). Deterministic seed-based wallets (recover your wallet with a 24-word seed)
  • March 2015 – tail emission added to code
  • March 2016 – monero hard forks to 2 min block and doubles block reward
There are basically two things in here that can be used to attack Monero:
  • Crippled miner: gave an unfair advantage to those brave enough to risk money and time to optimize and mine Monero.
  • Fast emission curve: not the Bitcoin-like curve that was initially advertised and widely accepted as suitable.
Though we have to say two things in support of the current Monero community and devs:
  • The crippled miner was coded either by Bytecoin or CryptoNote, and was 100% solved within a month by the Monero community.
  • The fast emission curve was a TFT miscalculation: he forgot to consider that by halving the block target he was unintentionally doubling the emission rate.
submitted by el_hispano to Monero [link] [comments]

SCRY.INFO underlying double chain technology sharing

SCRY.INFO underlying double chain technology sharing
1. Background
In the SCRY project, a double chain structure is applied in the clients. For the signature algorithm, we selected BIP143. In segregated witness, version 0 applies BIP143 signature verification to increase efficiency, but the BIP143 algorithm is not applied to general (non-segwit) transactions. We have optimized general transaction signing and verification by applying BIP143-style signing and verification to increase efficiency.
1.1 Signature algorithm
Bitcoin uses ECDSA (Elliptic Curve Digital Signature Algorithm) as its digital signature algorithm. There are 3 purposes of the digital signature in Bitcoin: 1. The signature proves who owns the private key, and therefore who owns the money being transferred in the transaction. 2. The authorization cannot be denied (non-repudiation), that is, the transaction cannot be denied. 3. The signature cannot be falsified, that is, the transaction (or its details) cannot be modified by anyone after it is signed.
There are two parts of digital signature: one is using private key( signature key) to sign the hash of message(transaction), the other one is to allow everyone can verify the signature by provided public key and information.
  • Signature algorithm
Bitcoin's signing operation is as follows:
Sig = Fsig( Fhash(m), dA )
Explanation:
dA is the signing private key
m is the transaction (or part of the transaction)
Fhash is the hash function
Fsig is the signing algorithm
Sig is the resulting signature
The whole signing process involves two functions: Fhash and Fsig.
  • Fhash function
Fhash generates the hash of the transaction: first serialize the transaction, then compute the transaction hash over the serialized binary data using SHA256. For a general transaction (single input and single output) the process is as follows:
Transaction serialization:
1. nVersion: transaction version
2. InputCount: input count
3. Prevouts: serialize the input UTXO
4. OutputCount: output count
5. Outputs: serialize the output UTXO
6. nLocktime: lock time of the transaction
7. Hash: compute SHA256 twice over the data above
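As a rough illustration of step 7, here is a sketch in Go of the double SHA256 over the serialized transaction bytes. serializedTx is just a placeholder name for the bytes produced by steps 1-6; this is not btcsuite's own code.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// doubleSHA256 hashes the serialized transaction twice with SHA256,
// as described in step 7 above.
func doubleSHA256(serializedTx []byte) []byte {
	first := sha256.Sum256(serializedTx)
	second := sha256.Sum256(first[:])
	return second[:]
}

func main() {
	// Placeholder bytes standing in for a serialized transaction.
	serializedTx := []byte{0x01, 0x00, 0x00, 0x00}
	fmt.Println(hex.EncodeToString(doubleSHA256(serializedTx)))
}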
  • Fsig function
The Fsig signing function is based on ECDSA. Each signing operation uses a nonce K. From K the algorithm derives a temporary key pair (K, Q), and the x coordinate of the temporary public key Q gives a value R. The second signature value is computed as follows:
S = K^-1 * (Hash(m) + dA * R) mod p
Explanation:
K is the temporary private key (the nonce)
R is the x coordinate of the temporary public key
dA is the signing private key
m is the transaction data
p is the order of the elliptic curve group
The function produces the value S.
In elliptic curve signing, a new K is generated for every signature. Reusing the same K exposes the private key, so K must be carefully protected. Bitcoin uses RFC 6979 to make K deterministic and SHA256 to keep K secure. A simplified version of the formula is:
K = SHA256(dA + HASH(m))
Explanation:
dA is the private key,
m is the message.
The final signature is the pair (R, S).
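A literal sketch (in Go) of the simplified K formula above. The real RFC 6979 derivation uses an HMAC-SHA256 based procedure with rejection sampling rather than a single SHA256, so this only conveys the idea of deriving the nonce deterministically from the private key and the message hash.

package main

import "crypto/sha256"

// deterministicK illustrates the simplified formula K = SHA256(dA || HASH(m)).
// It is NOT the actual RFC 6979 algorithm; it only shows that K is derived
// deterministically from the private key and the message hash, so the same
// input always yields the same nonce.
func deterministicK(privKey, msgHash []byte) [32]byte {
	input := append(append([]byte{}, privKey...), msgHash...)
	return sha256.Sum256(input)
}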
  • Signature verification
Verification applies the inverse operation to the signature; the formula is as follows:
P = S^-1 * Hash(m) * G + S^-1 * R * Qa
Explanation:
R and S are the signature values
Qa is the signer's public key
m is the signed transaction data
G is the generator point of the elliptic curve
From this formula we can see that, given the message (the transaction or its hash), the signer's public key and the signature values (R and S), we can compute the point P on the elliptic curve. If its x coordinate equals R, the signature is valid; substituting the definition of S shows that P = K*G, whose x coordinate is R by construction.
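To make the (R, S) relationship concrete, here is a minimal sign-and-verify round trip using Go's standard crypto/ecdsa package. Note the assumptions: it uses the P-256 curve and a random nonce rather than Bitcoin's secp256k1 with RFC 6979, so it only illustrates the general ECDSA flow, not Bitcoin's exact parameters.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Generate a key pair (P-256 here; Bitcoin uses secp256k1).
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// Hash the message, then sign the hash to obtain (R, S).
	msgHash := sha256.Sum256([]byte("transaction data"))
	r, s, err := ecdsa.Sign(rand.Reader, priv, msgHash[:])
	if err != nil {
		panic(err)
	}

	// Verification recomputes a curve point from (R, S), Hash(m) and the
	// public key, and checks its x coordinate against R.
	valid := ecdsa.Verify(&priv.PublicKey, msgHash[:], r, s)
	fmt.Println("signature valid:", valid)
}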
1.2 BIP143 brief introduction

There are four ECDSA (Elliptic Curve Digital Signature Algorithm) signature-verification opcodes (sigops): CHECKSIG, CHECKSIGVERIFY, CHECKMULTISIG and CHECKMULTISIGVERIFY. The transaction digest is hashed twice with SHA256. Bitcoin's original signature digest algorithm has at least two disadvantages:
● The hash used for verification is recomputed over the transaction bytes for each input, so signature verification takes O(n^2) time and can become very slow for large transactions. BIP143 optimizes the digest algorithm by introducing reusable "intermediate state", reducing the hashing work for signature verification to O(n).
● The original digest does not commit to the input amounts. This is not a problem for full nodes, but offline signing devices (cold wallets) do not have the input amounts available, so they cannot verify the exact amount being spent or the transaction fee. BIP143 includes the amount of every input in the signature.
BIP143 defines a new transaction digest algorithm. The standard is as follows:
Transaction serialization
https://preview.redd.it/2b6c5q2mk7b11.png?width=783&format=png&auto=webp&s=eb952782464942b6930bbd2632fbcd0fbaaf5023
Items 1, 4, 7, 9 and 10 in the list are the same as in the original SIGHASH algorithm, and the meanings of the original SIGHASH types are unchanged. The following items are changed:
  • The serialization method;
  • All SIGHASH types commit to the amount being signed;
  • FindAndDelete of the signature is no longer applied to the scriptCode;
  • OP_CODESEPARATOR(s) after the last executed OP_CODESEPARATOR are not removed from the scriptCode (the last executed OP_CODESEPARATOR and everything before it are removed);
  • SINGLE does not commit to the input index. When ANYONECANPAY is not set, the semantics are unchanged, since hashPrevouts and the outpoint implicitly commit to the input index. When SINGLE is used with ANYONECANPAY, the signed input and output exist as a pair but are not bound to a particular index.
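The "intermediate state" that makes BIP143 linear can be illustrated with hashPrevouts: the double SHA256 of all input outpoints, computed once per transaction and reused for every input's digest. Below is a rough Go sketch; the simplified Outpoint type is an assumption standing in for the real wire structures, not btcsuite's actual API.

package main

import (
	"crypto/sha256"
	"encoding/binary"
)

// Outpoint is a simplified stand-in for a transaction input's previous
// output reference: a 32-byte txid plus a 4-byte output index.
type Outpoint struct {
	TxID  [32]byte
	Index uint32
}

// hashPrevouts computes the BIP143-style intermediate hash over all input
// outpoints: double SHA256 of the concatenated (txid || little-endian index)
// pairs. It is computed once and reused for every input's digest, which is
// what brings signature hashing down to O(n).
func hashPrevouts(prevouts []Outpoint) [32]byte {
	var buf []byte
	for _, op := range prevouts {
		buf = append(buf, op.TxID[:]...)
		var idx [4]byte
		binary.LittleEndian.PutUint32(idx[:], op.Index)
		buf = append(buf, idx[:]...)
	}
	first := sha256.Sum256(buf)
	return sha256.Sum256(first[:])
}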
2. BIP143 signature
In Go, we use the btcsuite library to implement signing. btcsuite is a complete Bitcoin implementation that can run full Bitcoin node programs, but here we only use its public/private key API, its SHA API, and its RFC6979 signing API. To avoid redundancy, the code below is shown without modification.
2.1 Transaction hash generation
For transaction hash generation, every input in the transaction produces a hash value. If the transaction has multiple inputs, a hash array is generated, and each hash in the array corresponds to one input of the transaction.
https://preview.redd.it/n0x5bo9cl7b11.png?width=629&format=png&auto=webp&s=63f4951e5ca7d0cffc6e8905f5d4b33354aa6ecc
As with the two transaction inputs in the image above, each input generates a hash, so the transaction above generates two hashes.
  • Fhash function
CalcSignatureHash(script []byte, hashType SigHashType, tx *EMsgTx, idx int)
Explanation:
script: the pubkey script of the input UTXO being spent (its locking script)
hashType: the signature hash type
tx: the transaction details
idx: the input index, i.e. which input's hash to calculate
The following is Fhash code
https://preview.redd.it/e8xx974gl7b11.png?width=506&format=png&auto=webp&s=9a4f419069bea2e76b8d5b7205a31e06692f3f67
When one transaction has multiple UTXO inputs, apply the same procedure to each input to generate a hash array. Before generating each hash, clear the "SignatureScript" of all other inputs and keep only the "SignatureScript" of the current input (the "ScriptSig" field).
https://preview.redd.it/0omhp2ahl7b11.png?width=462&format=png&auto=webp&s=4cee9b0e4fe10185a39d68bde1032ac4e4dbb9ad
Each UTXO has a different amount. Pay attention to step 6: the value supplied there must be the amount of the corresponding input.
Generating the multi-input hash function:
func txHash(tx msgtx) ( *[][]byte)
Code details
https://preview.redd.it/rlnxv3lil7b11.png?width=581&format=png&auto=webp&s=804adbee92a9bb9811a4ffc395601ebf191fd664
Call the Fhash function (CalcSignatureHash) repeatedly to generate the hash array.
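A rough sketch of that loop, reusing the post's CalcSignatureHash. The EMsgTx type, the pkScripts parameter and the []byte return value are assumptions here, since the post's own code is only shown as images.

// buildSigHashes builds one signature hash per input by calling the post's
// CalcSignatureHash once for each input index. Types and signatures are
// assumed; see the note above.
func buildSigHashes(tx *EMsgTx, pkScripts [][]byte, hashType SigHashType) [][]byte {
	hashes := make([][]byte, 0, len(tx.TxIn))
	for idx := range tx.TxIn {
		// For input idx, only that input's script is included in the digest;
		// the SignatureScript fields of all other inputs stay empty.
		h := CalcSignatureHash(pkScripts[idx], hashType, tx, idx)
		hashes = append(hashes, h)
	}
	return hashes
}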
2.2 Signing the hash
A hash array is generated using the method above, with a unique hash for each input. We use the signRFC6979 signing function to sign each hash, calling functions from the btcsuite library directly.
signRFC6979(PrivateKey, hash)
This function generates the SignatureScript; the resulting value is added to each input's SignatureScript field in the transaction.
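Continuing the sketch above, the per-input signing step might look like the following; signRFC6979, EMsgTx and PrivateKey are the post's (assumed) definitions.

// signAllInputs signs each input's hash with the post's signRFC6979 helper
// and stores the result in that input's SignatureScript field. Types and
// helper signatures are assumed, as above.
func signAllInputs(tx *EMsgTx, priv *PrivateKey, hashes [][]byte) {
	for i, h := range hashes {
		tx.TxIn[i].SignatureScript = signRFC6979(priv, h)
	}
}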
2.3 Multisig
Briefly, multisig answers the question of how many private keys must sign one UTXO. The script records N public keys, and signatures for at least M of them must be provided to unlock the funds. This is called an M-of-N scheme: N is the number of public keys and M is the number of signatures required for verification.
The following shows how to implement a 2-of-2 multisig based on a P2SH (Pay-to-Script-Hash) script in Go.
Code for generating the 2-of-2 redeem script:
https://preview.redd.it/7dq7cv9kl7b11.png?width=582&format=png&auto=webp&s=108c6278d656e5fa6b51b5876d5a0f7a1231f933
The function above generates a script of the following form:
2 <pubkey1> <pubkey2> 2 OP_CHECKMULTISIG
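For reference, a 2-of-2 redeem script of this form can also be assembled with btcsuite's txscript.ScriptBuilder. This is a sketch, and the exact package path (github.com/btcsuite/btcd/txscript) is an assumption about the import layout in use.

package main

import "github.com/btcsuite/btcd/txscript"

// twoOfTwoRedeemScript assembles "2 <pub1> <pub2> 2 OP_CHECKMULTISIG" from
// two serialized public keys using txscript's ScriptBuilder.
func twoOfTwoRedeemScript(pub1, pub2 []byte) ([]byte, error) {
	return txscript.NewScriptBuilder().
		AddOp(txscript.OP_2).
		AddData(pub1).
		AddData(pub2).
		AddOp(txscript.OP_2).
		AddOp(txscript.OP_CHECKMULTISIG).
		Script()
}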
Signature function
1. Based on the transaction TX, which contains the input array []TxIn, generate the transaction hash array. This process is the same as for the general transaction above; use the same digest function.
func txHash(tx msgtx) ( *[][]byte)
This function generates a hash array; each transaction input corresponds to one hash value.
2. Take the first public key in the redeem script and sign with its corresponding private key. The process is the same as for a general transaction.
signRFC6979(PrivateKey, hash)
After signing, the signature array SignatureScriptArr1 is generated, one entry per input. Using the signature values in this array, update the "SignatureScript" field of every input TxIn in transaction TX.
3. Using the updated TX, call the txHash function again to generate a new hash array.
func txHash(tx msgtx) ( *[][]byte)
4. Take the second public key in the redeem script and sign with its corresponding private key. Using the updated TX from the step above, generate each input's hash and sign it.
signRFC6979(PrivateKey, hash)
// Combine the signature generated by the first key, the signature generated by the second key, and the redeem script.
etxscript.EncodeSigScript(&(TX.TxIn[i].SignatureScript),&SigHash2, pkScript)
There are N inputs, so repeat this N times.
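Pulling steps 1-4 together, the two-pass 2-of-2 signing flow might be structured as in the sketch below. buildSigHashes and signAllInputs are the sketches shown earlier; etxscript.EncodeSigScript is the post's helper, and the exact types and signatures are assumptions.

// signTwoOfTwo applies both signatures to every input of tx, following the
// two-pass procedure in steps 1-4 above. For readability the hash helpers
// return plain [][]byte slices here.
func signTwoOfTwo(tx *EMsgTx, priv1, priv2 *PrivateKey, redeemScript []byte, hashType SigHashType) {
	// For P2SH multisig the redeem script plays the role of the script
	// being hashed for every input.
	scripts := make([][]byte, len(tx.TxIn))
	for i := range scripts {
		scripts[i] = redeemScript
	}

	// Pass 1: hash every input and sign with the first key.
	signAllInputs(tx, priv1, buildSigHashes(tx, scripts, hashType))

	// Pass 2: re-hash the updated transaction, sign with the second key,
	// then combine both signatures with the redeem script.
	hashes2 := buildSigHashes(tx, scripts, hashType)
	for i := range tx.TxIn {
		sig2 := signRFC6979(priv2, hashes2[i])
		etxscript.EncodeSigScript(&tx.TxIn[i].SignatureScript, &sig2, redeemScript)
	}
}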
The final data is as following:
https://preview.redd.it/78aabhqll7b11.png?width=558&format=png&auto=webp&s=453f7129b2cf3c648b68c2369a4622963087d0c8
References
https://en.wikipedia.org/wiki/Digital_signature
https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki
O'Reilly, Mastering Bitcoin, 2nd Edition
http://www.8btc.com/rfc6979
submitted by StephenCuuuurry to SCRYDDD [link] [comments]

[bitcoin-dev] Draft BIP : fixed-schedule block size increase

Message: 5 Date: Mon, 22 Jun 2015 14:18:19 -0400 From: Gavin Andresen To: Johnathan Corgan Cc: [email protected] Subject: [bitcoin-dev] Draft BIP : fixed-schedule block size increase
I promised to write a BIP after I'd implemented increase-the-maximum-block-size code, so here it is. It also lives at: https://github.com/gavinandresen/bips/blob/blocksize/bip-8MB.mediawiki
I don't expect any proposal to please everybody; there are unavoidable tradeoffs to increasing the maximum block size. I prioritize implementation simplicity -- it is hard to write consensus-critical code, so simpler is better.
BIP: ?? Title: Increase Maximum Block Size Author: Gavin Andresen Status: Draft Type: Standards Track Created: 2015-06-22
==Abstract==
This BIP proposes replacing the fixed one megabyte maximum block size with a maximum size that grows over time at a predictable rate.
==Motivation==
Transaction volume on the Bitcoin network has been growing, and will soon reach the one-megabyte-every-ten-minutes limit imposed by the one megabyte maximum block size. Increasing the maximum size reduces the impact of that limit on Bitcoin adoption and growth.
==Specification==
After deployment on the network (see the Deployment section for details), the maximum allowed size of a block on the main network shall be calculated based on the timestamp in the block header.
The maximum size shall be 8,000,000 bytes at a timestamp of 2016-01-11 00:00:00 UTC (timestamp 1452470400), and shall double every 63,072,000 seconds (two years, ignoring leap years), until 2036-01-06 00:00:00 UTC (timestamp 2083190400). The maximum size of blocks in between doublings will increase linearly based on the block's timestamp. The maximum size of blocks after 2036-01-06 00:00:00 UTC shall be 8,192,000,000 bytes.
Expressed in pseudo-code, using integer math:
function max_block_size(block_timestamp):
    time_start = 1452470400
    time_double = 60*60*24*365*2
    size_start = 8000000
    if block_timestamp >= time_start + time_double*10
        return size_start * 2^10
    // Piecewise-linear-between-doublings growth:
    time_delta = block_timestamp - time_start
    doublings = time_delta / time_double
    remainder = time_delta % time_double
    interpolate = (size_start * 2^doublings * remainder) / time_double
    max_size = size_start * 2^doublings + interpolate
    return max_size
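A direct Go transcription of the pseudo-code, for readers who prefer compilable code (uint64 arithmetic with shifts in place of 2^x). This is just a restatement of the draft's formula, not reference code from the proposal.

package main

// maxBlockSize transcribes the draft's pseudo-code: 8,000,000 bytes at
// 2016-01-11 00:00:00 UTC, doubling every two years for ten doublings, with
// linear interpolation between doublings. It assumes blockTimestamp is at or
// after timeStart (i.e. post-activation).
func maxBlockSize(blockTimestamp uint64) uint64 {
	const (
		timeStart  uint64 = 1452470400
		timeDouble uint64 = 60 * 60 * 24 * 365 * 2
		sizeStart  uint64 = 8000000
	)
	if blockTimestamp >= timeStart+timeDouble*10 {
		return sizeStart << 10
	}
	timeDelta := blockTimestamp - timeStart
	doublings := timeDelta / timeDouble
	remainder := timeDelta % timeDouble
	interpolate := (sizeStart << doublings) * remainder / timeDouble
	return (sizeStart << doublings) + interpolate
}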
==Deployment==
Deployment shall be controlled by hash-power supermajority vote (similar to the technique used in BIP34), but the earliest possible activation time is 2016-01-11 00:00:00 UTC.
Activation is achieved when 750 of 1,000 consecutive blocks in the best chain have a version number with bits 3 and 14 set (0x20000004 in hex). The activation time will be the timestamp of the 750'th block plus a two week (1,209,600 second) grace period to give any remaining miners or services time to upgrade to support larger blocks. If a supermajority is achieved more than two weeks before 2016-01-11 00:00:00 UTC, the activation time will be 2016-01-11 00:00:00 UTC.
Block version numbers are used only for activation; once activation is achieved, the maximum block size shall be as described in the specification section, regardless of the version number of the block.
==Rationale==
The initial size of 8,000,000 bytes was chosen after testing the current reference implementation code with larger block sizes and receiving feedback from miners stuck behind bandwidth-constrained networks (in particular, Chinese miners behind the Great Firewall of China).
The doubling interval was chosen based on long-term growth trends for CPU power, storage, and Internet bandwidth. The 20-year limit was chosen because exponential growth cannot continue forever.
Calculations are based on timestamps and not blockchain height because a timestamp is part of every block's header. This allows implementations to know a block's maximum size after they have downloaded its header, but before downloading any transactions.
The deployment plan is taken from Jeff Garzik's proposed BIP100 block size increase, and is designed to give miners, merchants, and full-node-running-end-users sufficient time to upgrade to software that supports bigger blocks. A 75% supermajority was chosen so that one large mining pool does not have effective veto power over a blocksize increase. The version number scheme is designed to be compatible with Pieter Wuille's proposed "Version bits" BIP.
TODO: summarize objections/arguments from http://gavinandresen.ninja/time-to-roll-out-bigger-blocks.
TODO: describe other proposals and their advantages/disadvantages over this proposal.
==Compatibility==
This is a hard-forking change to the Bitcoin protocol; anybody running code that fully validates blocks must upgrade before the activation time or they will risk rejecting a chain containing larger-than-one-megabyte blocks.
Simplified Payment Verification software is not affected, unless it makes assumptions about the maximum depth of a transaction's merkle branch based on the minimum size of a transaction and the maximum block size.
==Implementation==
https://github.com/gavinandresen/bitcoinxt/tree/blocksize_fork
bitcoin-dev mailing list [email protected] https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
End of bitcoin-dev Digest, Vol 1, Issue 13
submitted by bitsko to Bitcoin [link] [comments]

How To Calculate Mining Profit: The Easy COMPLETE Guide!
How to calculate Hash power output of your cryptocurrency mining / cloud mining
Bitcoin Hash Calculator
How to Calculate Mining Profits
Hashshiny.io Cloud Mining 2020 App Bitcoin ETHEREUM LiteCoin Dash 2 years Contract + Referral

Bitcoin is a distributed, worldwide, decentralized digital money. Bitcoins are issued and managed without any central authority: no government, company, or bank is in charge of Bitcoin. You might be interested in Bitcoin if you like cryptography, distributed peer-to-peer systems, or economics. In Bitcoin Core (BTC) proof of work, miners use the transactions of a block and other identifying data as input to the SHA-256 hash function; to "mine" a block, they must find a block that hashes to a digest with a certain number of leading zeros. SHA-256 is a member of the SHA-2 family of cryptographic hash functions designed by the NSA; SHA stands for Secure Hash Algorithm. Cryptographic hash functions are mathematical operations run on digital data; by comparing the computed hash (the output of the algorithm) to a known and expected hash value, a person can check the data's integrity. Bitcoin's hash rate, a measure of the computing power directed at the Bitcoin network, climbed to a fresh all-time high ahead of the halving as miners tried to squeeze out as many bitcoin as possible before the reward dropped. Public key hashes use RIPEMD160 because it produces a 160-bit (20-byte) digest, smaller than the original public key (65 bytes uncompressed, 33 bytes compressed), so the resulting address contains fewer characters than a full public key and is easier to pass around. It is used in conjunction with SHA256 because RIPEMD160 on its own is not the strongest hash function.
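A minimal Go sketch of the SHA256 + RIPEMD160 combination described above (Bitcoin's HASH160), assuming the golang.org/x/crypto/ripemd160 package is available; the pubKey bytes are placeholders, not a real key.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"

	"golang.org/x/crypto/ripemd160"
)

// hash160 computes RIPEMD160(SHA256(data)), the 20-byte digest Bitcoin uses
// to turn a public key into a shorter public key hash.
func hash160(data []byte) []byte {
	sha := sha256.Sum256(data)
	h := ripemd160.New()
	h.Write(sha[:])
	return h.Sum(nil)
}

func main() {
	// Placeholder bytes standing in for a serialized public key.
	pubKey := []byte{0x02, 0x03}
	fmt.Println(hex.EncodeToString(hash160(pubKey)))
}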



These videos cover how to calculate mining profitability and the important factors to consider; how Bitcoin and blockchains use hash functions ("What the Hash?" by Blockmatics, and "SHA-256 The Center Of Bitcoin" by Andreas M. Antonopoulos); the Hashshiny.io cloud mining dashboard and its hash calculator for two-year contracts; and Bitcoin hash calculators that estimate how much BTC a given hash rate will generate and help identify miners with a good return on investment.
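The calculation these videos walk through reduces to a standard expected-value formula: expected BTC per day is roughly (your hash rate / (difficulty * 2^32)) * 86,400 * block reward. Below is a hedged Go sketch; the numbers in main are made-up examples, not current network values, and it ignores fees, pool commissions, and electricity costs.

package main

import "fmt"

// expectedBTCPerDay estimates daily mining revenue in BTC from a hash rate
// (hashes per second), the network difficulty, and the block reward, using
// the standard approximation: hashes needed per block ~= difficulty * 2^32.
func expectedBTCPerDay(hashRate, difficulty, blockReward float64) float64 {
	blocksPerDay := hashRate * 86400 / (difficulty * 4294967296.0)
	return blocksPerDay * blockReward
}

func main() {
	// Example numbers only; real difficulty and rewards change over time.
	fmt.Printf("%.8f BTC/day\n", expectedBTCPerDay(100e12, 1e13, 6.25))
}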
