Oct 17, 2018 - Here is more information on the currently available range of blockchain languages. What language is a blockchain written in? This question is at the heart of understanding blockchain programming. According to experts, the generic programming language required for such systems is a Turing-complete language.

Nov 03, 2017 - Introducing a programming language so simple, it "fits on a T-shirt": Simplicity is a blockchain programming language designed for minimalism. Ethereum, on the other hand, includes a more expressive, flexible, Turing-complete programming language, which in principle allows for arbitrarily complex smart contracts. But, in practice…
Ethereum Blockchain
An open-source, public, blockchain-based distributed computing platform and operating system featuring smart contract functionality, Ethereum enables distributed applications to be built and executed without downtime, fraud, control, or interference from any third party. Ethereum is not just a platform: it also provides a Turing-complete programming language running on a blockchain, which helps developers publish distributed applications.
One of the big projects around Ethereum is Microsoft's partnership with ConsenSys, which offers Ethereum Blockchain as a Service (EBaaS) on Microsoft Azure to empower enterprise clients and developers with a single-click, cloud-based blockchain developer environment.
Ethereum blockchain size depends on the client implementation: a Parity node stores about 6 GB, while Geth stores about 11 GB. The full Ethereum blockchain, stored in its entirety, can exceed 60 GB.
‘Bitcoin is first and foremost a currency; this is one particular application of a blockchain. However, it is far from the only application. To take a past example of a similar situation, e-mail is one particular use of the Internet, and for sure helped popularise it, but there are many others,’ says Dr Gavin Wood, Ethereum Co-founder.
What Does He Intend to Say?
Ethereum is not only meant to transfer funds on blockchains; it has a plethora of applications. Ether is the cryptocurrency generated on the Ethereum platform. It can be transferred between accounts and is used to compensate participant mining nodes for the computations they perform. As an altcoin, Ether has steadily carved a niche for itself in the blockchain domain: wherever there is a need to establish a channel between participating nodes in a blockchain transaction, Ether is the go-to cryptocurrency. Ethereum also provides a decentralized virtual machine, the Ethereum Virtual Machine (EVM), which executes scripts using an international network of public nodes. Unlike the script systems of earlier blockchains, the EVM's instruction set is Turing complete.
Moreover, within Ethereum there is a second accounting unit, 'gas', used to pay miners' fees for including transactions in their blocks. Gas is an internal transaction-pricing mechanism used to mitigate spam and allocate resources on the network: every smart-contract execution on Ethereum must be sent with a certain amount of gas to entice miners to include it in the blockchain.
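The mechanics of gas metering can be sketched in a few lines. The following toy interpreter (written in OCaml purely for illustration - it is not the real EVM, and all names are invented) charges one unit of gas per evaluation step and aborts once the gas is exhausted:

```ocaml
(* Minimal sketch of gas metering (not the real EVM; names invented):
   every evaluation step costs one unit of gas, and evaluation aborts
   once the gas counter reaches zero. *)

type expr =
  | Num of int
  | Add of expr * expr
  | Mul of expr * expr

exception Out_of_gas

(* Evaluate [e], charging one unit of gas per node visited.
   Returns the value and the remaining gas. *)
let rec eval gas e =
  if gas <= 0 then raise Out_of_gas
  else
    match e with
    | Num n -> (n, gas - 1)
    | Add (a, b) ->
        let (va, g1) = eval (gas - 1) a in
        let (vb, g2) = eval g1 b in
        (va + vb, g2)
    | Mul (a, b) ->
        let (va, g1) = eval (gas - 1) a in
        let (vb, g2) = eval g1 b in
        (va * vb, g2)

let () =
  (* Enough gas: (1 + 2) * 3 evaluates to 9. *)
  let (v, _) = eval 100 (Mul (Add (Num 1, Num 2), Num 3)) in
  Printf.printf "result = %d\n" v;
  (* Too little gas: evaluation is cut off before it finishes. *)
  match eval 2 (Mul (Add (Num 1, Num 2), Num 3)) with
  | exception Out_of_gas -> print_endline "out of gas"
  | _ -> ()
```

The machine always stops, but - as discussed later in this post - running out of gas tells you nothing about what the program would have done with more gas.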
Applications of the Ethereum Blockchain
Any service that starts out centralized can be decentralized using Ethereum. From services like loans to their intermediary counterparts, Ethereum can revolutionize thousands of processes across different industries and verticals. Now imagine: if all of them were decentralized, how much more secure and efficient every aspect of human existence would become.
Ethereum can also be used to build decentralized autonomous organizations (DAOs): fully autonomous, decentralized organizations with no single leader, operated entirely by program code - a collection of smart contracts written on the Ethereum blockchain. A DAO is owned by everyone who purchases its tokens; however, instead of each token equating to an equity share, tokens act as contributions that give people voting rights used to reach consensus.
Ethereum has also introduced a new standard, the ERC-721 token, used for tracking unique digital assets. A major use case for these tokens is digital collectibles: their infrastructure allows people to prove ownership of scarce digital goods.
No doubt, Ethereum will continue to prove its mettle in the marketplace, as it has been doing since it first came into public view.
Posted 3 years ago · Archived
Summary:
(1) Turing-complete languages are fundamentally inappropriate for writing 'smart contracts' - because the behavior of programs written in such languages is in general undecidable, which makes it impossible to know what a 'smart contract' will do before running it.
(2) We should learn from Wall Street's existing DSLs (domain-specific languages) for financial products and smart contracts, based on declarative and functional languages such as Ocaml and Haskell - instead of doing what the Web 2.0 'brogrammers' behind Solidity did, and what Peter Todd is also apparently embarking upon: ie, ignoring the lessons that Wall Street has already learned, and 'reinventing the wheel', using less-suitable languages such as C++ and JavaScript-like languages (Solidity), simply because they seem 'easier' for the 'masses' to use.
(3) We should also consider using specification languages (to say what a contract does) along with implementation languages (saying how it should do it) - because specifications are higher-level and easier for people to read than implementations which are lower-level meant for machines to run - and also because ecosystems of specification/implementation language pairs (such as Coq/Ocaml) support formal reasoning and verification tools which could be used to mathematically prove that a smart contract's implementation is 'correct' (ie, it satisfies its specification) before even running it.
Details:
When I have more time later, I will hopefully be able to write up a more gentle introduction on all this stuff, providing more explanations, motivations, and examples for laypersons who are interested in getting a feel for the deep subtle mathematical implications at play here in these emerging 'language design wars' around recent proposals to add 'smart contracts' to cryptocurrencies.
Right now I'm just providing this quick heads-up / reminder / warning, alluded to in the title of the OP, with some more pointers to the literature in the links above.
People who already do have a deep understanding of mathematics and its history will get the message right away - by recalling the crisis in the foundations of mathematics which occurred in the early 1900s, involving concepts like Russell's paradox, Gödel's incompleteness theorem, undecidability, Turing completeness, etc.
Turing-complete languages lead to 'undecidable' programs (ie, you cannot figure out what they do until after you run them)
One hint: recall that Gödel's incompleteness theorem proved that any sufficiently powerful consistent mathematical system must also be incomplete [hat tip] - that is, in any such system, it must be possible to formulate propositions which are undecidable within that system.
This is related to things like the Halting Problem.
And by the way, Ethereum's concept of 'gas' is not a real solution to the Halting Problem: Yes, running out of 'gas' means that the machine will 'stop' eventually, but this naïve approach does not overcome the more fundamental problems regarding undecidability of programs written using a Turing-complete language.
The take-away is that:
When using any Turing-complete language, it will always be possible for someone (eg, the DAO hacker, or some crook like Bernie Madoff, or some well-meaning but clueless dev from slock.it) to formulate a 'smart contract' whose meaning cannot be determined in advance by merely inspecting the code: ie, it will always be possible to write a smart contract whose meaning can only be determined after running the code.
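A tiny, concrete illustration of this point (a hedged sketch, not a real contract): the following OCaml function implements the Collatz iteration, and whether it halts on every positive input is a famous open problem - so no amount of inspection, human or mechanical, can currently certify its behavior in advance:

```ocaml
(* Illustrative sketch: whether [steps n] halts for every n > 0 is
   the Collatz conjecture, an open problem. Mere inspection of this
   code cannot tell you - which is exactly the kind of uncertainty a
   Turing-complete contract language permits. *)

(* Number of Collatz steps needed to reach 1 from [n]. *)
let rec steps n =
  if n <= 1 then 0
  else if n mod 2 = 0 then 1 + steps (n / 2)
  else 1 + steps (3 * n + 1)

let () =
  (* Terminates on every input anyone has tried so far, but no proof
     exists that it terminates on *all* inputs, so no tool can
     certify in advance that this 'contract' always halts. *)
  Printf.printf "steps 27 = %d\n" (steps 27)
```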
Take a moment to contemplate the full, deep (and horrifying) implications of all this.
Some of the greatest mathematicians and computer scientists of the 20th century already discovered and definitively proved (much to the consternation of most of their less-sophisticated (naïve) colleagues - who nevertheless eventually came around and begrudgingly agreed with them) that:
- Given a 'smart contract' written in a Turing-complete language...
- it is in general impossible to determine the semantics / behavior of that 'smart contract' in advance, by mere inspection - either by a human, or even by a machine such as a theorem prover or formal reasoning tool (because such tools unfortunately only work on more-restricted languages, not on Turing-complete languages - for info on such more-restricted languages, see further below on 'constructivism' and 'intuitionistic logic').
The horrifying conclusion is that:
- the only way to determine the semantics / behavior of a 'smart contract' is 'after-the-fact' - ie, by actually running it on some machine (eg, the notorious EVM) and waiting to see what happens (eg, waiting for a hacker to 'steal' tens of millions of dollars - simply because he understood the semantics / behavior of the code better than the developers did).
This all is based on a very, very deep result of mathematics (Gödel's Incompleteness Theorem, as referenced in some of the links above) - which even many mathematicians themselves had a hard time understanding and accepting.
And it is also very, very common for programmers to not understand or accept this deep mathematical result.
Most programmers do not understand the implications of Gödel's incompleteness theorem on Turing-complete languages
As a mathematician first, and a programmer second, I can confirm from my own experience that most programmers do not understand this important mathematical history at all, and its implications - it is simply too subtle or too foreign for them to grasp.
Their understanding of computing is childish, naïve, and simplistic.
They simply view a computer as a marvelous machine which can execute a sequence of instructions in some language (and please note that, for them, that language usually happens to simply 'come with' the machine, so they unquestionably accept whatever language that happens to be - ie, they almost never dive deeper into the subtle concepts of 'language design' itself - a specialized area of theoretical computer science which few of them ever think about).
Paradigms lost
As we've seen, time after time, this failure of most programmers to contemplate the deeper implications of 'language design' has led to the familiar litany of disasters and 'learning experiences' where programmers have slowly abandoned one 'programming paradigm' and moved on to the next, after learning (through bitter experience) certain hard facts and unpleasant, non-intuitive realities which initially escaped their attention when they were simply enjoying the naïve thrill of programming - such as the following:
- GO TO is considered harmful;
- TRY / CATCH / THROW constructs are considered harmful (they're not much better than GO TO in terms of program control flow);
- callbacks in languages like node.js are considered harmful (they result in unreadable spaghetti code, which is totally obviated in more advanced functional languages with monads);
- destructive update / assignment is considered harmful (when compared with immutable data structures - which are by the way essential for parallelism - and we should remember that any cryptocurrency runtime environment will by definition be parallel);
- the procedural / imperative paradigm is considered harmful (when compared with the declarative paradigm);
- even the object-oriented paradigm is starting to be considered harmful (when compared with the pure functional paradigm): this is where many programmers are today, going through the 'epiphany' of moving away from object-oriented languages like C++ or Java, to languages incorporating functional aspects like C# or Scala, or languages which are even more functional such as Haskell, ML, or OCaml;
- more advanced programmers are even starting to see that it is considered harmful to not initially write (or, just as bad, to never even get around to writing after the fact) a specification stating 'what' a program is supposed to do, before proceeding to write (or semi-automatically derive) an implementation stating 'how' it should do it (cough, cough - see the 'Bitcoin reference implementation' in the low-level C++ language, with which all other implementations are expected to be '100% bug compatible': it is an utter abomination and disgrace to expect the 'worldwide ledger' to run on a system which has no carefully designed, human-readable specification - merely an increasingly spaghetti-code-like implementation which can only be parsed by the inner priesthood of pinheads at Core/Blockstream - and trust me, this is one 'worse is better' situation which they're perfectly comfortable with, because it simply cements their power even further by discouraging the rest of us from examining 'their' code and contributing to 'their' project) - and by the way, the Curry-Howard Isomorphism tells us that providing an implementation without a specification is just as bad / ridiculous / meaningless / pointless as (ie, it is mathematically equivalent / isomorphic to) stating a proof without stating the theorem that is being proved.
Today, in cryptocurrencies, we are seeing this sad history repeat itself, with plenty of examples of programmers who don't understand these subtle concepts involving the foundations of mathematics - specifically, the mathematical fact (Gödel's Incompleteness Theorem) that any logical system or language which is 'powerful' enough to be Turing complete must also be incomplete - ie, must admit undecidable propositions.
The naïve Ethereum people think they've cleverly sidestepped this with the notion of 'gas' but actually all they're doing is cheating with this messy kludge: because simply saying 'we'll arbitrarily make the program stop running at some point' does not make 'smart contracts' written in Ethereum 'decidable' - as we've seen, these contracts can still blow up / go wrong in other ways before they run out of gas.
Peter Todd /u/petertodd might also be an example of this confusion (given his history, my hunch is that he probably is - but I haven't had time to do a thorough investigation yet) - with his recent post proposing smart contracts in Bitcoin based on the lambda calculus.
Basically, the only way to avoid falling into the 'Turing tar-pit' of confusing and misleading semantics / behavior and undecidability will be to use slightly more restricted languages which are carefully designed / selected to not be Turing-complete.
There are plenty of non-Turing-complete languages available to learn from.
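For a feel of what 'designed to not be Turing-complete' can mean, here is a minimal sketch (all constructs invented for illustration, written in OCaml) of a stack language whose only looping construct repeats its body a fixed number of times - so every program in it provably terminates:

```ocaml
(* Sketch of a deliberately non-Turing-complete 'contract language'
   (all names invented): the only iteration construct is
   [Repeat (n, body)], which runs the body exactly n times. Every
   program in this language terminates. *)

type instr =
  | Push of int
  | Add                        (* pop two, push their sum *)
  | Repeat of int * instr list (* run body a fixed number of times *)

(* Total evaluator: structural recursion plus bounded repetition
   means this always terminates. *)
let rec run stack prog =
  match prog with
  | [] -> stack
  | Push n :: rest -> run (n :: stack) rest
  | Add :: rest ->
      (match stack with
       | a :: b :: s -> run ((a + b) :: s) rest
       | _ -> failwith "stack underflow")
  | Repeat (n, body) :: rest ->
      let rec loop i st = if i = 0 then st else loop (i - 1) (run st body) in
      run (loop (max n 0) stack) rest

let () =
  (* Starting from 0, add 5 three times: result is 15. *)
  match run [] [ Push 0; Repeat (3, [ Push 5; Add ]) ] with
  | [ v ] -> Printf.printf "result = %d\n" v
  | _ -> ()
```

Because evaluation always terminates, a contract's behavior in such a language can be determined simply by bounded evaluation - there is no halting problem to run into.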
One possibility would be to consider languages which are based on intuitionistic logic / constructivism / Martin-Löf's Type theory / Heyting Logic - which is similar to classical Boolean logic except that Heyting Logic rejects the Law of the Excluded Middle.
What all these 'schools of mathematics' have in common is a more restricted and more concrete notion of 'proof', supporting a safer mode of computation, where something is considered 'proven' or 'true' only if you can provide concrete evidence.
By the way, the word 'witness' in 'Segregated Witness' - meaning a proof that has been constructed, to 'witness' the truth of a proposition, or the validity of a block - comes from the realm of constructivism in mathematics.
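In a constructive setting, 'proving that some element satisfies a property' literally means producing such an element - the witness. A one-function OCaml sketch of this idea (names invented):

```ocaml
(* Constructive 'existence proof' as a program: to show that some
   element of a list satisfies predicate [p], actually produce one.
   [Some x] carries the witness; [None] means no witness was found. *)
let rec find_witness p = function
  | [] -> None
  | x :: rest -> if p x then Some x else find_witness p rest

let () =
  match find_witness (fun x -> x mod 2 = 0) [ 1; 3; 4; 5 ] with
  | Some w -> Printf.printf "witness: %d\n" w
  | None -> print_endline "no witness"
```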
These languages are somewhat more restricted than Turing-complete languages, but they are still quite expressive and efficient enough to specify nearly any sort of financial rules or 'smart contracts' which we might desire.
In fact, the notion of 'smart contracts' is actually not new at all, and a lot of related work has already been done in this area - and, interestingly, it is based mostly on the kinds of 'functional languages' which most of the developers at Core/Blockstream, and at slock.it, are not familiar with (since they are trapped in the imperative paradigm of less-safe procedural languages such as C++ and JavaScript):
Wall Street is already writing DSLs for 'smart contracts' - mostly using functional languages
Check out the many, many languages for smart contracts already being used by major financial firms, and notice how most of them are functional (based on Ocaml and Haskell), not procedural (like C++ and JavaScript):
The lesson to learn here is simple: Just because we are storing our data on a blockchain and running our code on a permissionless distributed network, does not mean that we should ignore the rich, successful history of existing related work on designing financial products and 'smart contracts' which has already been happening on Wall Street using functional languages.
In fact, if we want to run 'smart contracts' on a permissionless distributed concurrent parallel network (rather than on a centralized system), then it actually becomes even more important to use functional and declarative paradigms and immutable data structures supported by languages like Ocaml and Haskell, and avoid the imperative and procedural paradigms involving mutable data structures, which are almost impossible to get right in a distributed concurrent parallel architecture. (See the video 'The Future is Parallel, and the Future of Parallel is Declarative' for an excellent 1-hour explanation of this).
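To give a flavor of what these financial DSLs look like, here is a heavily simplified OCaml sketch in the spirit of the combinator libraries referenced above (the 'Composing Contracts' line of work). All names and the toy valuation are invented for illustration - a real library would model time, observables, and discounting:

```ocaml
(* Hedged sketch of a declarative financial-contract DSL, in the
   spirit of Wall Street's combinator libraries. All names invented. *)

type currency = USD | EUR

type contract =
  | Zero                      (* no rights, no obligations *)
  | One of currency           (* receive one unit now *)
  | Scale of float * contract (* multiply all payouts *)
  | Both of contract * contract (* acquire both contracts *)
  | Give of contract          (* reverse the direction of payment *)

(* Toy 'valuation': sum the immediate USD-equivalent payouts.
   The flat EUR/USD rate of 1.1 is an assumption for the sketch. *)
let rec value = function
  | Zero -> 0.0
  | One USD -> 1.0
  | One EUR -> 1.1
  | Scale (k, c) -> k *. value c
  | Both (a, b) -> value a +. value b
  | Give c -> -.(value c)

let () =
  (* Receive $100 and pay out EUR 50. *)
  let c = Both (Scale (100.0, One USD), Give (Scale (50.0, One EUR))) in
  Printf.printf "value = %.2f\n" (value c)
```

Note the key property: the contract is pure data, so it can be analyzed, valued, or verified by inspection - without ever 'running' it.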
Only non-Turing-complete languages support formal reasoning and verification
Basically, a language which is not Turing complete, but is instead based on the slightly more restricted 'Intuitionistic Logic' or 'Constructivism', satisfies an important property where it is possible to do 'formal reasoning and verification' about any program written in that language.
This is what we need when dealing with financial products and smart contracts: we need to be able to know in advance 'what' the program does (ie, before running it) - which can be done using tools such as formal reasoning and verification and 'correctness proofs' (which are not applicable to Turing-complete languages).
Turing-complete languages for 'smart contracts' are needlessly dangerous because you can't figure out in advance what they do
As the 'language design wars' around cryptocurrencies and 'smart contracts' begin to heat up, we must always insist on using only non-Turing-complete languages which enable us to use the tools of formal reasoning and verification to mathematically prove in advance that a 'smart contract' program actually does 'what' it is supposed to do.
Separating specification from implementation is essential for proving correctness
A specification stating 'what the smart contract does' should ideally be spelled out separately from the implementation stating 'how' it should do it.
In other words, a high-level, more compact and human-readable specification language can be used to mathematically (and in many cases (semi-)automatically) derive - and formally verify, ie, provide a mathematical correctness proof for - the low-level, hard-to-read, machine-runnable program in an implementation language, which tells the machine 'how the smart contract does what it does'.
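A minimal sketch of this separation (in OCaml, with invented names): the specification is a predicate saying what a correct result looks like, and the implementation is checked against it. In a real toolchain (eg, Coq extracting to Ocaml) the check would be a machine-verified proof rather than a runtime test:

```ocaml
(* Specification ('what'): the output is sorted and is a permutation
   of the input. Expressed here as an executable predicate. *)
let is_sorted l =
  let rec go = function
    | a :: (b :: _ as rest) -> a <= b && go rest
    | _ -> true
  in
  go l

let spec input output =
  is_sorted output
  && List.sort compare input = List.sort compare output

(* Implementation ('how'): insertion sort. *)
let rec insert x = function
  | [] -> [ x ]
  | y :: rest -> if x <= y then x :: y :: rest else y :: insert x rest

let rec sort = function
  | [] -> []
  | x :: rest -> insert x (sort rest)

let () =
  let input = [ 3; 1; 2 ] in
  (* Runtime check that the implementation meets the spec on this
     input; a proof assistant would establish it for *all* inputs. *)
  assert (spec input (sort input));
  print_endline "implementation satisfies spec on this input"
```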
A simple list of 'language design' requirements for smart contracts
The following considerations are important for ensuring safety of smart contracts:
- At a minimum, we should use only non-Turing-complete languages; and
- Ideally, our toolbox should also allow providing a compact, more human-readable, high-level specification saying 'what the smart contract does', and not just a lengthy, low-level machine-runnable implementation saying 'how it should do it'.
- Finally, wherever possible, we should also use tools for formal reasoning and verification so that we can prove in advance that the 'how' of any smart contract's machine-runnable implementation does indeed satisfy the 'what' of its human-readable specification. (However, several people have already pointed out that this idealistic 'reasoning and verification' requirement may well turn out to be a very challenging task for code intended to run on a permissionless, massively parallel, concurrent, distributed, worldwide computing network - since this is actually cutting-edge research involving non-determinism, as manifested, for example, in the 'game theory' scenarios we often hear about.)
So, the requirements for languages for smart contracts should include:
(1) Our language should be non-Turing-complete - ie, it should be based instead on 'Intuitionistic Logic' / 'Constructivism';
(2) We should favor declarative languages (and also things like immutable data structures) - because these are the easiest to run on parallel architectures.
(3) Our toolbox should support formal reasoning and verification, allowing us to mathematically prove that a low-level machine-runnable implementation satisfies its high-level, human-readable specification before we actually run it.
Some YouTube videos for further study
There's a video discussing how declarative languages with immutable data structures (such as Haskell, which is pure functional) are a nice 'fit' for parallel programming:
The Future is Parallel, and the Future of Parallel is Declarative
There's also some videos about how Jane Street Capital has been successfully using the language OCaml (which includes functional, object-oriented, and imperative paradigms) to develop financial products:
Why OCaml
Caml Trading
Lessons from history
When I see Peter Todd writing a blog post where he embarks on informally specifying a new language for 'smart contracts for Bitcoin' based on lambda calculus, it makes me shudder and recollect Greenspun's Tenth Rule, which states:
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Only now, it looks like Peter Todd is going to try to single-handedly re-implement languages like Ocaml and Haskell, and then try to build the same financial DSLs (domain-specific languages) which Wall Street already built on them.
I think a much better approach would be to show a bit more humility, and a little less of the 'NIH' (not invented here) syndrome, and see what we can learn from the vast amount of existing work in this area - specifically, the DSLs (domain-specific languages) which Wall Street is already using with great success for automating financial products and smart contracts:
And remember, most of that existing work involving DSLs for financial products and smart contracts was done on top of functional languages like Ocaml and Haskell - it was not done on top of imperative languages like C++ and JavaScript (and Solidity, which is 'JavaScript-like' in many ways).
There are reasons for this - and any so-called dev who ignores that vast body of existing, related work is simply a victim of too much ego and too little awareness of the fin-tech giants who have gone before him.
I'm sure Peter Todd is having a wonderful time being geek with all this - and the hordes of suck-ups and wanna-be's who slavishly worship the C++ pinheads at Core/Blockstream will be duly impressed by all his pseudo-mathematical mumbo-jumbo - but this is mere mental masturbation, if it ignores the major amount of related work that's already been done in this area.
Smart contracts for cryptocurrencies should use Wall Street's existing DSLs for financial contracts, written in Ocaml and Haskell, as a starting point. Eventually, we could perhaps even use a language like Coq for writing specifications and proving that the implementations satisfy those specifications. Any so-called 'dev' who fails to acknowledge this previous work is simply not serious.
Ignorance is bliss, and cocksure Peter Todd is probably merely embarking on a futile mission of hubris by trying to create all this stuff from scratch, based on his limited experience as a coder coming from the procedural / imperative paradigm, apparently unaware of the decades of related work which have shown that doing provably correct parallel programming is a gargantuan, arduous challenge which may very well turn out to be insurmountable at this time.
Lord help us if this immature, ignorant vandal who wants Bitcoin to fail takes the ignorant followers of r/bitcoin and Core down the path of these so-called 'smart contracts' - reinventing decades of work already done on Wall Street and in academia using Haskell and Ocaml, while they screw around with 'easier' languages based on C++ and JavaScript.
Further reading
For more discussion about the emerging 'language design wars' around the idea of possibly adding 'smart contracts' to cryptocurrencies, here are some recent links from Reddit:
The bug which the 'DAO hacker' exploited was not 'merely in the DAO itself' (ie, separate from Ethereum). The bug was in Ethereum's language design itself (Solidity / EVM - Ethereum Virtual Machine) - shown by the 'recursive call bug discovery' divulged (and dismissed) on slock.it last week.
Can we please never again put 100m in a contract without formal correctness proofs?
Would the smart formal methods people here mind chiming in with some calm advice on this thread?