Interpreter (computing)

{{Short description|Software that executes encoded logic}}
{{Needs sources|date=September 2025}}


[[File:W3sDesign Interpreter Design Pattern UML.jpg|thumb|300px|W3sDesign Interpreter Design Pattern [[Unified Modeling Language|UML]]]]
{{Program execution}}


In [[computing]], an '''interpreter''' is [[software]] that [[execution (computers)|executes]] [[source code]] without first [[compiling]] it to [[machine code]]. '''Interpreted languages''' differ from [[compiled languages]], which involve the translation of source code into [[CPU]]-native [[executable code]]. Depending on the [[runtime environment]], interpreters may first translate the source code to an intermediate format, such as [[bytecode]]. Hybrid runtime environments may also translate the bytecode into machine code via [[just-in-time compilation]], as in the case of [[.NET]] and [[Java (programming language)|Java]], instead of interpreting the bytecode directly.


Before the widespread adoption of interpreters, the execution of [[computer programs]] often relied on [[compilers]], which translate source code into machine code. Early runtime environments for [[Lisp programming language|Lisp]] and [[BASIC interpreter|BASIC]] could parse source code directly. Later, runtime environments were developed for languages (such as [[Perl]], [[Raku (programming language)|Raku]], [[Python (programming language)|Python]], [[MATLAB]], and [[Ruby (programming language)|Ruby]]) that translate source code into an intermediate format before executing it, to enhance [[runtime performance]].


Code that runs in an interpreter can be run on any platform that has a [[software compatibility|compatible]] interpreter. The same code can be distributed to any such platform, instead of an [[executable]] having to be built for each platform. Although each programming language is usually associated with a particular runtime environment, a language can be used in different environments. Interpreters have been constructed for languages traditionally associated with [[Compiler|compilation]], such as [[ALGOL]], [[Fortran]], [[COBOL]], [[C (programming language)|C]] and [[C++]].


== History ==
In the early days of computing, compilers were more commonly found and used than interpreters, because hardware at that time could not support both the interpreter and the interpreted code, and because the typical batch environment of the time limited the advantages of interpretation.<ref>{{cite web|title=Why was the first compiler written before the first interpreter?|url=https://arstechnica.com/information-technology/2014/11/why-was-the-first-compiler-written-before-the-first-interpreter/|website=[[Ars Technica]]|date=8 November 2014|access-date=9 November 2014}}</ref>


Interpreters were used as early as 1952 to ease programming within the limitations of computers at the time (e.g. a shortage of program storage space, or no native support for floating point numbers). Interpreters were also used to translate between low-level machine languages, allowing code to be written for machines that were still under construction and tested on computers that already existed.<ref>{{cite journal |last1=Bennett |first1=J. M. |last2=Prinz |first2=D. G. |last3=Woods |first3=M. L. |title=Interpretative sub-routines |journal=Proceedings of the ACM National Conference, Toronto |date=1952}}</ref> The first interpreted high-level language was [[Lisp (programming language)|Lisp]]. Lisp was first implemented by [[Steve Russell (computer scientist)|Steve Russell]] on an [[IBM 704]] computer. Russell had read [[John McCarthy (computer scientist)|John McCarthy]]'s paper, "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I", and realized (to McCarthy's surprise) that the Lisp ''eval'' function could be implemented in machine code.<ref>As reported by [[Paul Graham (computer programmer)|Paul Graham]] in ''[[Hackers & Painters]]'', p. 185, McCarthy said: "Steve Russell said, look, why don't I program this ''eval''..., and I said to him, ho, ho, you're confusing theory with practice, this ''eval'' is intended for reading, not for computing. But he went ahead and did it. That is, he compiled the ''eval'' in my paper into [[IBM 704]] machine code, fixing [[Software bug|bugs]], and then advertised this as a Lisp interpreter, which it certainly was. So at that point Lisp had essentially the form that it has today..."</ref> The result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, "evaluate Lisp expressions".


The development of editing interpreters was influenced by the need for interactive computing. In the 1960s, the introduction of time-sharing systems allowed multiple users to access a computer simultaneously, and editing interpreters became essential for managing and modifying code in real-time. The first editing interpreters were likely developed for mainframe computers, where they were used to create and modify programs on the fly. One of the earliest examples of an editing interpreter is the EDT (Editor and Debugger for the TECO) system, which was developed in the late 1960s for the PDP-1 computer. EDT allowed users to edit and debug programs using a combination of commands and macros, paving the way for modern text editors and interactive development environments.{{cn|date=October 2024}}


==Use==
Notable uses for interpreters include:
 
; Commands and scripts: Interpreters are frequently used to execute [[command-line interface|commands]] and [[script language|scripts]].
 
; [[Virtualization]]: An interpreter acts as a [[virtual machine]] to execute machine code for a hardware architecture different from the one running the interpreter.


; Emulation: An interpreter (virtual machine) can [[emulator|emulate]] another computer system in order to run code written for that system.


; [[Sandbox (computer security)|Sandboxing]]: While some types of sandboxes rely on operating system protections, an interpreter (virtual machine) can offer additional control such as blocking code that violates [[computer security|security]] rules.{{citation needed|date=January 2013}}


; Self-modifying code: [[Self-modifying code]] can be implemented in an interpreted language. This relates to the origins of interpretation in Lisp and [[artificial intelligence]] research.{{citation needed|date=January 2013}}


==Efficiency==
Interpretive overhead is the runtime cost of executing code via an interpreter instead of as native (compiled) code. Interpreting is slower because the interpreter must execute several machine-code instructions for each operation that native code would perform directly. In particular, access to variables is slower in an interpreter because the mapping of identifiers to storage locations must be done repeatedly at run-time rather than at [[compile time]].<ref name="FOLDOC" /> But faster development (due to factors such as a shorter edit-build-run cycle) can outweigh the value of faster execution, especially when prototyping and testing, when the edit-build-run cycle is exercised frequently.<ref name="FOLDOC">{{FOLDOC|Interpreter}}</ref><ref>{{Cite web |title=Compilers vs. interpreters: explanation and differences |url=https://www.ionos.com/digitalguide/websites/web-development/compilers-vs-interpreters/ |access-date=2022-09-16 |website=IONOS Digital Guide |language=en}}</ref>


An interpreter may generate an [[intermediate representation]] (IR) of the program from source code in order to achieve goals such as fast runtime performance. A compiler may also generate an IR, but where the compiler generates machine code for later execution, the interpreter prepares to execute the program immediately; these differing goals lead to differing IR designs. Many [[BASIC]] interpreters replace [[keyword (computer programming)|keyword]]s with single [[byte]] [[Token threading|tokens]] which can be used to find the instruction in a [[jump table]].<ref name="FOLDOC" /> A few interpreters, such as the [[PBASIC]] interpreter, achieve even higher levels of program compaction by using a bit-oriented rather than a byte-oriented program memory structure, where command tokens occupy perhaps 5 bits, nominally "16-bit" constants are stored in a [[variable-length code]] requiring 3, 6, 10, or 18 bits, and address operands include a "bit offset". Many BASIC interpreters can store and read back their own tokenized internal representation.
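The following is a minimal sketch in [[C++]] of such byte-token dispatch through a jump table; the token names and handlers are invented for illustration and are not drawn from any particular BASIC:

<syntaxhighlight lang="cpp">
#include <array>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical single-byte tokens for a few keywords.
enum Token : std::uint8_t { TOK_PRINT, TOK_BEEP, TOK_END };

using Handler = void (*)();
void doPrint() { std::cout << "PRINT\n"; }
void doBeep()  { std::cout << '\a'; }
void doEnd()   { std::cout << "END\n"; }

// Jump table: a token's byte value is the index of its handler.
constexpr std::array<Handler, 3> jumpTable{doPrint, doBeep, doEnd};

int main() {
    // A tokenized program: each keyword is stored as a single byte.
    const std::vector<std::uint8_t> program{TOK_PRINT, TOK_BEEP, TOK_END};
    for (std::uint8_t token : program)
        jumpTable[token]();  // find and run the instruction via the table
}
</syntaxhighlight>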


There are various compromises between the development speed when using an interpreter and the execution speed when using a compiler. Some systems (such as some [[Lisp (programming language)|Lisps]]) allow interpreted and compiled code to call each other and to share variables. This means that once a routine has been tested and debugged under the interpreter it can be compiled and thus benefit from faster execution while other routines are being developed.{{citation needed|date=January 2013}}
Historically, most interpreter systems have had a self-contained editor built in. This is becoming common for compilers as well (the combination is then often called an [[Integrated development environment|IDE]]), although some programmers prefer to use an editor of their choice and run the compiler, linker and other tools manually.


==Implementation==
Since the early stages of interpreting and compiling are similar, an interpreter might use the same [[lexical analysis|lexical analyzer]] and [[parser]] as a compiler and then interpret the resulting [[abstract syntax tree]].


==Example==
An expression interpreter written in [[C++]].


<syntaxhighlight lang="cpp">
import std;


using std::runtime_error;
using std::unique_ptr;
using std::variant;


// data types for abstract syntax tree
enum class Kind: char {  
    VAR,
    CONST,
    SUM,
    DIFF,
    MULT,
    DIV,
    PLUS,
    MINUS,
    NOT
};


// forward declaration
class Node;


class Variable {  
public:
    int* memory;
};


{| class="wikitable collapsible collapsed"  style="float:right; text-align:left;"
class Constant {
public:
    int value;  
};
 
class UnaryOperation {
public:
    unique_ptr<Node> right;
};
 
class BinaryOperation {  
public:
    unique_ptr<Node> left;
    unique_ptr<Node> right;
};
 
using Expression = variant<Variable, Constant, BinaryOperation, UnaryOperation>;
 
class Node {
public:
    Kind kind;
    Expression e;
};


// interpreter procedure
[[nodiscard]]
int executeIntExpression(const Node& n) {
    int leftValue;
    int rightValue;
    switch (n->kind) {
        case Kind::VAR:
            return *std::get<Variable>(n.e).memory;
        case Kind::CONST:
            return std::get<Constant>(n.e).value;
        case Kind::SUM:
        case Kind::DIFF:
        case Kind::MULT:
        case Kind::DIV: {
                    exception("division by zero"); // doesn't return
            const BinaryOperation& bin = std::get<BinaryOperation>(n.e);
            leftValue = executeIntExpression(*bin.left);
            rightValue = executeIntExpression(*bin.right);
            switch (n.kind) {
                case Kind::SUM:
                    return leftValue + rightValue;
                case Kind::DIFF:
                    return leftValue - rightValue;
                case Kind::MULT:
                    return leftValue * rightValue;
                case Kind::DIV:  
                    if (rightValue == 0) {
                        throw runtime_error("Division by zero");
                    }
                    return leftValue / rightValue;
            }
            std::unreachable();
        }
        case Kind::PLUS:
        case Kind::MINUS:  
        case Kind::NOT: {
            const UnaryOperation& un = std::get<UnaryOperation>(n.e);
            rightValue = executeIntExpression(*un.right);
            switch (n.kind) {
                case Kind::PLUS:
                    return +rightValue;
                case Kind::MINUS:
                    return -rightValue;
                case Kind::NOT:
                    return !rightValue;
            }
            std::unreachable();
        }
        default:  
            std::unreachable();
    }
}
</syntaxhighlight>
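Continuing the example, a caller could build the syntax tree for an expression such as <code>x + 2</code> and evaluate it. The following usage sketch is hypothetical and simply exercises the classes defined above:

<syntaxhighlight lang="cpp">
int main() {
    int x = 40;

    // Build the abstract syntax tree for the expression: x + 2
    auto variable = std::make_unique<Node>(Node{Kind::VAR, Variable{&x}});
    auto constant = std::make_unique<Node>(Node{Kind::CONST, Constant{2}});
    const Node sum{Kind::SUM, BinaryOperation{std::move(variable), std::move(constant)}};

    std::println("{}", executeIntExpression(sum));  // prints 42
}
</syntaxhighlight>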


==Just-in-time compilation==
[[Just-in-time compilation|Just-in-time (JIT) compilation]] is the process of converting an intermediate format (such as bytecode) to native code at runtime. As this results in native code execution, it avoids the runtime cost of using an interpreter while maintaining some of the benefits that led to the development of interpreters.
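The underlying mechanism, writing native instructions into memory and then calling them like a function, can be sketched in a few lines. The example below is hypothetical and platform-specific (x86-64 on a POSIX system); the machine code is hand-written here, whereas a real JIT compiler would generate it from bytecode and would also handle W^X memory policies, instruction-cache invalidation, and optimization:

<syntaxhighlight lang="cpp">
#include <cstring>
#include <iostream>
#include <sys/mman.h>

int main() {
    // Hand-assembled x86-64 machine code for: int f(int x) { return x + 1; }
    //   8d 47 01    lea eax, [rdi + 1]
    //   c3          ret
    const unsigned char code[] = {0x8d, 0x47, 0x01, 0xc3};

    // Allocate writable, executable memory and copy the code into it.
    void* mem = mmap(nullptr, sizeof code, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return 1;
    std::memcpy(mem, code, sizeof code);

    // Call the freshly emitted native code like an ordinary function.
    auto f = reinterpret_cast<int (*)(int)>(mem);
    std::cout << f(41) << '\n';  // prints 42
    munmap(mem, sizeof code);
}
</syntaxhighlight>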


==Variations==
; [[Control table]] interpreter: Logic is specified as data formatted as a table.  
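For example, the logic of a simple state machine can be specified as rows of a table that otherwise generic code walks. The following C++ sketch (a hypothetical turnstile controller, invented for illustration) keeps all of the decision logic in the data:

<syntaxhighlight lang="cpp">
#include <iostream>

enum State { LOCKED, UNLOCKED };
enum Event { COIN, PUSH };

// One row of the control table: in this state, on this event, go to next.
struct Row { State state; Event event; State next; };

constexpr Row table[] = {
    {LOCKED,   COIN, UNLOCKED},  // a coin unlocks the turnstile
    {LOCKED,   PUSH, LOCKED},
    {UNLOCKED, COIN, UNLOCKED},
    {UNLOCKED, PUSH, LOCKED},    // pushing through locks it again
};

// The "interpreter": generic code that merely consults the table.
State step(State s, Event e) {
    for (const Row& row : table)
        if (row.state == s && row.event == e) return row.next;
    return s;  // no matching rule: remain in the current state
}

int main() {
    State s = LOCKED;
    for (Event e : {COIN, PUSH, PUSH})
        s = step(s, e);
    std::cout << (s == LOCKED ? "locked\n" : "unlocked\n");  // prints "locked"
}
</syntaxhighlight>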


; {{anchor|Compreter}}Bytecode interpreter: Some interpreters process [[bytecode]] which is an intermediate format of logic compiled from a high-level language. For example, [[Emacs Lisp]] is compiled to bytecode which is interpreted by an interpreter. One might say that this compiled code is machine code for a virtual machine {{endash}} implemented by the interpreter. Such an interpreter is sometimes called a ''compreter''.<ref name="Kühnel_1987_Kleincomputer">{{cite book |editor-first1=Rainer |editor-last1=Erlekampf |editor-first2=Hans-Joachim |editor-last2=Mönk |author-first=Claus |author-last=Kühnel |page=222 |title=Mikroelektronik in der Amateurpraxis |trans-title=Micro-electronics for the practical amateur |chapter=4. Kleincomputer - Eigenschaften und Möglichkeiten |trans-chapter=4. Microcomputer - Properties and possibilities |publisher={{ill|Militärverlag der Deutschen Demokratischen Republik|de}}, Leipzig |location=Berlin |date=1987 |orig-year=1986 |edition=3 |language=de |isbn=3-327-00357-2 |id=7469332}}</ref><ref name="Heyne_1984_Compreter">{{cite journal |title=Basic-Compreter für U880 |trans-title=BASIC compreter for U880 (Z80) |author-first=R. |author-last=Heyne |journal={{ill|radio-fernsehn-elektronik|de|Radio Fernsehen Elektronik}} |language=de |date=1984 |volume=1984 |issue=3 |pages=150–152}}</ref>
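A minimal sketch of a bytecode interpreter's dispatch loop, assuming a hypothetical stack-based instruction set rather than any real virtual machine, might look like this:

<syntaxhighlight lang="cpp">
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical one-byte opcodes for a tiny stack machine.
enum Op : std::uint8_t { PUSH, ADD, MUL, PRINT, HALT };

void run(const std::vector<std::uint8_t>& code) {
    std::vector<int> stack;
    for (std::size_t pc = 0; pc < code.size(); ) {
        switch (code[pc++]) {  // fetch and decode one bytecode
            case PUSH:  stack.push_back(code[pc++]); break;  // operand byte follows
            case ADD:   { int b = stack.back(); stack.pop_back(); stack.back() += b; break; }
            case MUL:   { int b = stack.back(); stack.pop_back(); stack.back() *= b; break; }
            case PRINT: std::cout << stack.back() << '\n'; break;
            case HALT:  return;
        }
    }
}

int main() {
    // Bytecode for: print (2 + 4) * 7
    run({PUSH, 2, PUSH, 4, ADD, PUSH, 7, MUL, PRINT, HALT});
}
</syntaxhighlight>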


; Threaded code interpreter: A [[threaded code]] interpreter is similar to a bytecode interpreter, but instead of bytes it uses pointers. Each instruction is a word that points to a function or an instruction sequence, possibly followed by a parameter. The threaded code interpreter either loops, fetching instructions and calling the functions they point to, or fetches the first instruction and jumps to it, with every instruction sequence ending in a fetch of, and jump to, the next instruction. One example of threaded code is the [[Forth (programming language)|Forth]] code used in [[Open Firmware]] systems: the source language is compiled into "F code" (a bytecode), which is then interpreted by a [[virtual machine]].{{citation needed|date=January 2013}}
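The fetch-and-call loop can be rendered portably in C++ as follows; this is a simplified "call threading" sketch with invented routine names (classic threaded-code interpreters such as Forth's typically use indirect jumps instead of a dispatch loop):

<syntaxhighlight lang="cpp">
#include <iostream>
#include <vector>

// State shared by the instruction routines.
static std::vector<int> stack;
static bool running = true;

using Instruction = void (*)();

void lit40() { stack.push_back(40); }  // push the constant 40
void lit2()  { stack.push_back(2); }   // push the constant 2
void add()   { int b = stack.back(); stack.pop_back(); stack.back() += b; }
void print() { std::cout << stack.back() << '\n'; }
void halt()  { running = false; }

int main() {
    // The "program" is a sequence of pointers to routines, not opcode bytes.
    const Instruction program[] = {lit40, lit2, add, print, halt};
    const Instruction* ip = program;  // instruction pointer
    while (running) (*ip++)();        // fetch a pointer and call it; prints 42
}
</syntaxhighlight>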


; Abstract syntax tree interpreter: An abstract syntax tree interpreter transforms source code into an [[abstract syntax tree]] (AST), then interprets it directly, or uses it to generate native code via JIT compilation.<ref>[http://lambda-the-ultimate.org/node/716 AST intermediate representations], Lambda the Ultimate forum</ref> In this approach, each sentence needs to be parsed just once. As an advantage over bytecode, AST keeps the global program structure and relations between statements (which is lost in a bytecode representation), and when compressed provides a more compact representation.<ref name="KistlerFranz1999">{{cite journal
|last1=Kistler |first1=Thomas
|last2=Franz |first2=Michael |author-link2=Michael Franz
|url=http://oberon2005.oberoncore.ru/paper/mf1997a.pdf
|access-date=2020-12-20
}}</ref> Thus, using AST has been proposed as a better intermediate format than bytecode. However, for interpreters, an AST causes more overhead than bytecode, because nodes related to syntax perform no useful work, the representation is less sequential (requiring the traversal of more pointers), and visiting the tree itself carries overhead.<ref>[http://webkit.org/blog/189/announcing-squirrelfish/ Surfin' Safari - Blog Archive » Announcing SquirrelFish]. Webkit.org (2008-06-02). Retrieved on 2013-08-10.</ref>
 


; Template interpreter: Rather than implementing the execution of code via a large switch statement over every possible bytecode while operating on a software stack or a tree walk, a template interpreter maintains a large array of bytecode (or any efficient intermediate representation) mapped directly to corresponding native machine instructions that can be executed on the host hardware as key-value pairs (or, in more efficient designs, direct addresses to the native instructions),<ref name="auto">{{cite web|url=https://github.com/openjdk/jdk|title=openjdk/jdk|website=GitHub|date=18 November 2021}}</ref><ref>{{cite web|url=https://openjdk.java.net/groups/hotspot/docs/RuntimeOverview.html#Interpreter |title=HotSpot Runtime Overview |publisher=Openjdk.java.net |date= |accessdate=2022-08-06}}</ref> known as a "Template". When the particular code segment is executed, the interpreter simply loads or jumps to the opcode mapping in the template and directly runs it on the hardware.<ref>{{Cite news|url=https://metebalci.com/blog/demystifying-the-jvm-jvm-variants-cppinterpreter-and-templateinterpreter/|title=Demystifying the JVM: JVM Variants, Cppinterpreter and TemplateInterpreter|website=metebalci.com}}</ref><ref>{{cite web |title=JVM template interpreter|website=ProgrammerSought|url=https://programmersought.com/article/5521858566/}}</ref> Due to this design, the template interpreter strongly resembles a JIT compiler rather than a traditional interpreter; however, it is not technically a JIT, because it merely translates code from the language into native calls one opcode at a time rather than creating optimized sequences of CPU-executable instructions from an entire code segment. Because the interpreter simply passes calls directly to the hardware rather than implementing them itself, it is much faster than the other interpreter types, even bytecode interpreters, and somewhat less prone to bugs; as a tradeoff, it is harder to maintain, because the interpreter must support translation to multiple different architectures instead of a platform-independent virtual machine or stack. Notable template interpreter implementations include the interpreter within the Sun HotSpot Java virtual machine, Java's official reference implementation,<ref name="auto"/> and the Ignition interpreter in the Google [[V8 (JavaScript engine)|V8]] JavaScript execution engine.


; Microcode: [[Microcode]] provides an abstraction layer as a hardware interpreter that implements machine code in a lower-level machine code.<ref name=Kent2813>{{cite book |last1=Kent |first1=Allen |last2=Williams |first2=James G. |title=Encyclopedia of Computer Science and Technology: Volume 28 - Supplement 13 |date=April 5, 1993 |publisher=Marcel Dekker, Inc |location=New York |isbn=0-8247-2281-7 |url=https://books.google.com/books?id=EjWV8J8CQEYC |access-date=Jan 17, 2016}}</ref> It separates the high-level machine instructions from the underlying [[electronics]] so that the high-level instructions can be designed and altered more freely. It also facilitates providing complex multi-step instructions, while reducing the complexity of computer circuits.


== See also ==
* {{Annotated link |Dynamic compilation}}
* {{Annotated link |Homoiconicity}}
* {{Annotated link |Meta-circular evaluator}}
* {{Annotated link |Partial evaluation}}
* {{Annotated link |Read–eval–print loop}}


== References ==
{{Reflist}}

== External links ==
* [http://www.columbia.edu/acis/history/interpreter.html IBM Card Interpreters] page at Columbia University
* [https://archive.org/download/TheoreticalFoundationsForPracticaltotallyFunctionalProgramming/33429551_PHD_totalthesis.pdf Theoretical Foundations For Practical 'Totally Functional Programming'] (Chapter 7 especially) Doctoral dissertation tackling the problem of formalising what is an interpreter
* [https://www.youtube.com/watch?v=_C5AHaS1mOA Short animation] explaining the key conceptual difference between interpreters and compilers. Archived at [http://ghostarchive.org/varchive/_C5AHaS1mOA ghostarchive.org] on May 9, 2022.


{{Computer science}}

Latest revision as of 21:59, 25 October 2025





Code that runs in an interpreter can be run on any platform that has a compatible interpreter. The same code can be distributed to any such platform, instead of an executable having to be built for each platform. Although each programming language is usually associated with a particular runtime environment, a language can be used in different environments. Interpreters have been constructed for languages traditionally associated with compilation, such as ALGOL, Fortran, COBOL, C and C++.

History

In the early days of computing, compilers were more commonly found and used than interpreters, both because hardware of the era could not hold the interpreter and the interpreted code in memory at the same time, and because the typical batch environment of the time limited the advantages of interpretation.[1]

Interpreters were used as early as 1952 to ease programming within the limitations of computers at the time (e.g. a shortage of program storage space, or no native support for floating point numbers). Interpreters were also used to translate between low-level machine languages, allowing code to be written for machines that were still under construction and tested on computers that already existed.[2] The first interpreted high-level language was Lisp. Lisp was first implemented by Steve Russell on an IBM 704 computer. Russell had read John McCarthy's paper, "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I", and realized (to McCarthy's surprise) that the Lisp eval function could be implemented in machine code.[3] The result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, "evaluate Lisp expressions".

The development of editing interpreters was influenced by the need for interactive computing. In the 1960s, the introduction of time-sharing systems allowed multiple users to access a computer simultaneously, and editing interpreters became essential for managing and modifying code in real-time. The first editing interpreters were likely developed for mainframe computers, where they were used to create and modify programs on the fly. One of the earliest examples of an editing interpreter is the EDT (Editor and Debugger for the TECO) system, which was developed in the late 1960s for the PDP-1 computer. EDT allowed users to edit and debug programs using a combination of commands and macros, paving the way for modern text editors and interactive development environments.[citation needed]

Use

Notable uses for interpreters include:

Commands and scripts
Interpreters are frequently used to execute commands and scripts.
Virtualization
An interpreter acts as a virtual machine to execute machine code for a hardware architecture different from the one running the interpreter.
Emulation
An interpreter (virtual machine) can emulate another computer system in order to run code written for that system.
Sandboxing
While some types of sandboxes rely on operating system protections, an interpreter (virtual machine) can offer additional control, such as blocking code that violates security rules; see the sketch after this list.[citation needed]
Self-modifying code
Self-modifying code can be implemented in an interpreted language. This relates to the origins of interpretation in Lisp and artificial intelligence research.[citation needed]

Efficiency

Interpretive overhead is the runtime cost of executing code via an interpreter instead of as native (compiled) code. Interpreting is slower because the interpreter must execute several machine-code instructions for functionality that native code achieves directly. In particular, access to variables is slower in an interpreter because the mapping of identifiers to storage locations must be done repeatedly at run time rather than once at compile time (see the sketch below).[4] However, faster development (due to factors such as a shorter edit-build-run cycle) can outweigh faster execution speed, especially during prototyping and testing, when the edit-build-run cycle is exercised frequently.[4][5]

An interpreter may generate an intermediate representation (IR) of the program from source code in order to achieve goals such as fast runtime performance. A compiler may also generate an IR, but the compiler's goal is machine code for later execution, whereas the interpreter prepares to execute the program immediately; these differing goals lead to differing IR designs. Many BASIC interpreters replace keywords with single-byte tokens which can be used to find the instruction in a jump table, as sketched below.[4] A few interpreters, such as the PBASIC interpreter, achieve even higher levels of program compaction by using a bit-oriented rather than byte-oriented program memory structure, where command tokens occupy perhaps 5 bits, nominally "16-bit" constants are stored in a variable-length code requiring 3, 6, 10, or 18 bits, and address operands include a "bit offset". Many BASIC interpreters can store and read back their own tokenized internal representation.

There are various compromises between development speed when using an interpreter and execution speed when using a compiler. Some systems (such as some Lisps) allow interpreted and compiled code to call each other and to share variables. This means that once a routine has been tested and debugged under the interpreter, it can be compiled and thus benefit from faster execution while other routines are still being developed.[citation needed]

Implementation

Since the early stages of interpreting and compiling are similar, an interpreter might use the same lexical analyzer and parser as a compiler and then interpret the resulting abstract syntax tree.

Example

An expression interpreter written in C++.

import std;

using std::runtime_error;
using std::unique_ptr;
using std::variant;

// data types for abstract syntax tree
enum class Kind : char {
    VAR,
    CONST,
    SUM,
    DIFF,
    MULT,
    DIV,
    PLUS,
    MINUS,
    NOT
};

// forward declaration
class Node;

// a variable refers to the storage holding its current value
class Variable {
public:
    int* memory;
};

class Constant {
public:
    int value;
};

class UnaryOperation {
public:
    unique_ptr<Node> right;
};

class BinaryOperation {
public:
    unique_ptr<Node> left;
    unique_ptr<Node> right;
};

using Expression = variant<Variable, Constant, BinaryOperation, UnaryOperation>;

class Node {
public:
    Kind kind;
    Expression e;
};

// interpreter procedure: recursively walks the tree, evaluating each node
[[nodiscard]]
int executeIntExpression(const Node& n) {
    switch (n.kind) {
        case Kind::VAR:
            return *std::get<Variable>(n.e).memory;
        case Kind::CONST:
            return std::get<Constant>(n.e).value;
        case Kind::SUM:
        case Kind::DIFF:
        case Kind::MULT:
        case Kind::DIV: {
            const BinaryOperation& bin = std::get<BinaryOperation>(n.e);
            const int leftValue = executeIntExpression(*bin.left);
            const int rightValue = executeIntExpression(*bin.right);
            switch (n.kind) {
                case Kind::SUM:
                    return leftValue + rightValue;
                case Kind::DIFF:
                    return leftValue - rightValue;
                case Kind::MULT:
                    return leftValue * rightValue;
                default: // Kind::DIV
                    if (rightValue == 0) {
                        throw runtime_error("Division by zero");
                    }
                    return leftValue / rightValue;
            }
        }
        case Kind::PLUS:
        case Kind::MINUS:
        case Kind::NOT: {
            const UnaryOperation& un = std::get<UnaryOperation>(n.e);
            const int rightValue = executeIntExpression(*un.right);
            switch (n.kind) {
                case Kind::PLUS:
                    return +rightValue;
                case Kind::MINUS:
                    return -rightValue;
                default: // Kind::NOT
                    return !rightValue;
            }
        }
        default:
            std::unreachable();
    }
}

Just-in-time compilation

Just-in-time (JIT) compilation is the process of converting an intermediate format (e.g. bytecode) to native code at runtime. As this results in native code execution, it avoids the runtime cost of using an interpreter while maintaining some of the benefits that led to the development of interpreters.

Variations

Control table interpreter
Logic is specified as data formatted as a table.
Bytecode interpreter
Some interpreters process bytecode, an intermediate format of logic compiled from a high-level language. For example, Emacs Lisp is compiled to bytecode, which is then interpreted by a bytecode interpreter. One might say that this compiled code is machine code for a virtual machine – implemented by the interpreter. Such an interpreter is sometimes called a compreter.[6][7]
Threaded code interpreter
A threaded code interpreter is similar to a bytecode interpreter, but instead of bytes it uses pointers. Each instruction is a word that points to a function or an instruction sequence, possibly followed by a parameter. The threaded code interpreter either loops, fetching instructions and calling the functions they point to, or fetches the first instruction and jumps to it, with every instruction sequence ending in a fetch of, and jump to, the next instruction (a minimal dispatch sketch follows this list). One example of threaded code is the Forth code used in Open Firmware systems: the source language is compiled into "F code" (a bytecode), which is then interpreted by a virtual machine.[citation needed]
Abstract syntax tree interpreter
An abstract syntax tree interpreter transforms source code into an abstract syntax tree (AST), then either interprets it directly or uses it to generate native code via JIT compilation.[8] In this approach, each sentence needs to be parsed just once. As an advantage over bytecode, the AST keeps the global program structure and the relations between statements (both of which are lost in a bytecode representation), and when compressed it provides a more compact representation.[9] Thus, using an AST has been proposed as a better intermediate format than bytecode. For interpreters, however, an AST causes more overhead than bytecode: syntax-related nodes perform no useful work, the representation is less sequential (requiring the traversal of more pointers), and visiting the tree itself has a cost.[10]
Template interpreter
Rather than implementing execution with a large switch statement over every possible bytecode while operating on a software stack or a tree walk, a template interpreter maintains a large array mapping each bytecode (or other efficient intermediate representation) directly to corresponding native machine instructions that can be executed on the host hardware, as key-value pairs (or, in more efficient designs, as direct addresses of the native instructions),[11][12] known as a "template". When a particular code segment is executed, the interpreter simply loads or jumps to the opcode's mapping in the template and runs it directly on the hardware.[13][14] Because of this design, a template interpreter resembles a JIT compiler much more strongly than a traditional interpreter does; however, it is technically not a JIT, because it merely translates code from the language into native calls one opcode at a time, rather than creating optimized sequences of CPU-executable instructions from an entire code segment. Because the interpreter simply passes calls directly to the hardware rather than implementing each operation itself, it is much faster than every other type, even bytecode interpreters, and to an extent less prone to bugs; as a tradeoff, it is more difficult to maintain, since the interpreter must support translation to multiple different architectures instead of a platform-independent virtual machine/stack. To date, the only template interpreter implementations of widely known languages are the interpreter within Java's official reference implementation, the Sun HotSpot Java Virtual Machine,[11] and the Ignition interpreter in the Google V8 JavaScript execution engine.
Microcode
Microcode provides an abstraction layer as a hardware interpreter that implements machine code in a lower-level machine code.[15] It separates the high-level machine instructions from the underlying electronics so that the high-level instructions can be designed and altered more freely. It also facilitates providing complex multi-step instructions, while reducing the complexity of computer circuits.

See also

  • BASIC interpreter
  • Command-line interpreter
  • Compiled language
  • Dynamic compilation
  • Homoiconicity
  • Meta-circular evaluator
  • Partial evaluation
  • Read–eval–print loop

References



External links

  • IBM Card Interpreters page at Columbia University: http://www.columbia.edu/acis/history/interpreter.html
  • Theoretical Foundations For Practical 'Totally Functional Programming' (Chapter 7 especially), a doctoral dissertation tackling the problem of formalising what an interpreter is: https://archive.org/download/TheoreticalFoundationsForPracticaltotallyFunctionalProgramming/33429551_PHD_totalthesis.pdf
  • Short animation explaining the key conceptual difference between interpreters and compilers: https://www.youtube.com/watch?v=_C5AHaS1mOA

  1.
  2.
  3. According to what is reported by Paul Graham in Hackers & Painters, p. 185, McCarthy said: "Steve Russell said, look, why don't I program this eval..., and I said to him, ho, ho, you're confusing theory with practice, this eval is intended for reading, not for computing. But he went ahead and did it. That is, he compiled the eval in my paper into IBM 704 machine code, fixing bugs, and then advertised this as a Lisp interpreter, which it certainly was. So at that point Lisp had essentially the form that it has today..."
  4. a b c
  5.
  6.
  7.
  8. AST intermediate representations, Lambda the Ultimate forum
  9.
  10. Surfin' Safari – Blog Archive » Announcing SquirrelFish. Webkit.org (2008-06-02). Retrieved on 2013-08-10.
  11. a b
  12.
  13.
  14.
  15. Kent, Allen; Williams, James G. (April 5, 1993). Encyclopedia of Computer Science and Technology: Volume 28 – Supplement 13. New York: Marcel Dekker. ISBN 0-8247-2281-7.