
From Wikipedia, the free encyclopedia

In compiler design, static single assignment form (often abbreviated as SSA form or simply SSA) is a type of intermediate representation (IR) where each variable is assigned exactly once. SSA is used in most high-quality optimizing compilers for imperative languages, including LLVM, the GNU Compiler Collection, and many commercial compilers.

There are efficient algorithms for converting programs into SSA form. To convert to SSA, each variable in the original IR is split into versions: new variables, typically indicated by the original name with a subscript, so that every definition gets its own version. Additional statements that assign to new versions of variables may also need to be introduced at points where two control-flow paths join. Converting from SSA form back to machine code is also efficient.

SSA makes numerous analyses needed for optimizations easier to perform, such as determining use-define chains, because when looking at a use of a variable there is only one place where that variable may have received a value. Most optimizations can be adapted to preserve SSA form, so that one optimization can be performed after another with no additional analysis. SSA-based optimizations are usually more efficient and more powerful than their non-SSA predecessors.

In functional language compilers, such as those for Scheme and ML, continuation-passing style (CPS) is generally used. SSA is formally equivalent to a well-behaved subset of CPS excluding non-local control flow, so optimizations and transformations formulated in terms of one generally apply to the other. Using CPS as the intermediate representation is more natural for higher-order functions and interprocedural analysis. CPS also easily encodes call/cc, whereas SSA does not.[1]

History

SSA was developed in the 1980s by several researchers at IBM. Kenneth Zadeck, a key member of the team, moved to Brown University as development continued.[2][3] A 1986 paper introduced birthpoints, identity assignments, and variable renaming such that variables had a single static assignment.[4] A subsequent 1987 paper by Jeanne Ferrante and Ronald Cytron[5] proved that the renaming done in the previous paper removes all false dependencies for scalars.[3] In 1988, Barry Rosen, Mark N. Wegman, and Kenneth Zadeck replaced the identity assignments with Φ-functions, introduced the name "static single-assignment form", and demonstrated a now-common SSA optimization.[6] The name Φ-function was chosen by Rosen to be a more publishable version of "phony function".[3] Alpern, Wegman, and Zadeck presented another optimization, but using the name "static single assignment".[7] Finally, in 1989, Rosen, Wegman, Zadeck, Cytron, and Ferrante found an efficient means of converting programs to SSA form.[8]

Benefits

The primary usefulness of SSA comes from how it simultaneously simplifies and improves the results of a variety of compiler optimizations, by simplifying the properties of variables. For example, consider this piece of code:

y := 1
y := 2
x := y

Humans can see that the first assignment is not necessary, and that the value of y being used in the third line comes from the second assignment of y. A program would have to perform reaching definition analysis to determine this. But if the program is in SSA form, both of these are immediate:

y1 := 1
y2 := 2
x1 := y2
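
The renaming step for straight-line code (no control flow) can be sketched as follows. This is an illustrative toy, not any particular compiler's API; the statement encoding as (target, operand-list) pairs is an assumption:

```python
def to_ssa(stmts):
    """Rename a straight-line list of (target, operands) statements
    so that every target is assigned exactly once."""
    version = {}  # original name -> highest version number issued so far
    current = {}  # original name -> its latest SSA name

    def fresh(name):
        version[name] = version.get(name, 0) + 1
        ssa_name = f"{name}{version[name]}"
        current[name] = ssa_name
        return ssa_name

    out = []
    for target, operands in stmts:
        # Rewrite each operand to the latest version of that name;
        # literals (not in `current`) pass through unchanged.
        new_operands = [current.get(v, v) for v in operands]
        out.append((fresh(target), new_operands))
    return out

# The example from the text: y := 1; y := 2; x := y
ssa = to_ssa([("y", ["1"]), ("y", ["2"]), ("x", ["y"])])
# -> [("y1", ["1"]), ("y2", ["2"]), ("x1", ["y2"])]
```

With versions in place, the fact that x1 depends only on y2, and that y1 is dead, is visible without any reaching-definition analysis.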

Compiler optimization algorithms that are either enabled or strongly enhanced by the use of SSA include:

  • Constant folding – conversion of computations from runtime to compile time, e.g. treat the instruction a=3*4+5; as if it were a=17;
  • Value range propagation[9] – precompute the potential ranges a calculation could produce, allowing branches to be predicted or resolved in advance
  • Sparse conditional constant propagation – propagate constants while simultaneously using them to detect and remove branches that can never be taken
  • Dead-code elimination – remove code that will have no effect on the results
  • Global value numbering – replace duplicate calculations producing the same result
  • Partial-redundancy elimination – removing duplicate calculations previously performed in some branches of the program
  • Strength reduction – replacing expensive operations by less expensive but equivalent ones, e.g. replace integer multiply or divide by powers of 2 with the potentially less expensive shift left (for multiply) or shift right (for divide).
  • Register allocation – optimize how the limited number of machine registers may be used for calculations
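
To see why SSA helps here, consider constant folding and propagation: because each SSA name has exactly one definition, a single pass in definition order suffices, with no need to track which assignment is in effect. A minimal sketch (the (name, op, args) encoding is an assumption for illustration):

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def fold_constants(defs):
    """defs: SSA definitions in order, as (name, op, args), where args
    are int literals or previously defined SSA names. Returns the names
    whose values could be computed at compile time."""
    known = {}  # SSA name -> constant value, when one is known
    for name, op, args in defs:
        # Substitute already-folded names; unknown names become None.
        vals = [known.get(a) if isinstance(a, str) else a for a in args]
        if all(isinstance(v, int) for v in vals):
            known[name] = OPS[op](*vals)
    return known

# a1 := 3 * 4 ; b1 := a1 + 5  folds to 17, as in a=3*4+5 above
folded = fold_constants([("a1", "*", [3, 4]), ("b1", "+", ["a1", 5])])
# folded["b1"] == 17
```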

Converting to SSA

Converting ordinary code into SSA form is primarily a matter of replacing the target of each assignment with a new variable, and replacing each use of a variable with the "version" of the variable reaching that point. For example, consider the following control-flow graph:

An example control-flow graph, before conversion to SSA

Changing the name on the left-hand side of "x ← x - 3" and changing the following uses of x to that new name would leave the program unaltered. This can be exploited in SSA by creating two new variables: x1 and x2, each of which is assigned only once. Likewise, giving distinguishing subscripts to all the other variables yields:

An example control-flow graph, partially converted to SSA

It is clear which definition each use is referring to, except for one case: both uses of y in the bottom block could be referring to either y1 or y2, depending on which path the control flow took.

To resolve this, a special statement is inserted in the last block, called a Φ (Phi) function. This statement will generate a new definition of y called y3 by "choosing" either y1 or y2, depending on the control flow in the past.

An example control-flow graph, fully converted to SSA

Now, the last block can simply use y3, and the correct value will be obtained either way. A Φ function for x is not needed: only one version of x, namely x2, reaches this place, so there is no problem (in other words, Φ(x2, x2) = x2).

Given an arbitrary control-flow graph, it can be difficult to tell where to insert Φ functions, and for which variables. This general question has an efficient solution that can be computed using a concept called dominance frontiers (see below).

Φ functions are not implemented as machine operations on most machines. A compiler can implement a Φ function by inserting "move" operations at the end of every predecessor block. In the example above, the compiler might insert a move from y1 to y3 at the end of the middle-left block and a move from y2 to y3 at the end of the middle-right block. These move operations might not end up in the final code based on the compiler's register allocation procedure. However, this approach may not work when simultaneous operations are speculatively producing inputs to a Φ function, as can happen on wide-issue machines. Typically, a wide-issue machine has a selection instruction used in such situations by the compiler to implement the Φ function.
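
The copy-insertion approach just described can be sketched as follows. The data layout is hypothetical, and real compilers place the moves before each predecessor's terminator; here terminators are left implicit:

```python
def eliminate_phis(blocks, phis):
    """blocks: block name -> list of instruction strings.
    phis: list of (join_block, target, {pred_block: source_name}).
    Lower each Φ by appending a move at the end of every predecessor
    and deleting the Φ from the join block."""
    for join, target, sources in phis:
        for pred, src in sources.items():
            blocks[pred].append(f"{target} := {src}")
        blocks[join] = [ins for ins in blocks[join]
                        if not ins.startswith(f"{target} := phi")]
    return blocks

# The y example above: y3 := Φ(y1, y2) at the join of two paths.
blocks = {
    "left":  ["y1 := 1"],
    "right": ["y2 := 2"],
    "join":  ["y3 := phi(y1, y2)", "print y3"],
}
lowered = eliminate_phis(
    blocks, [("join", "y3", {"left": "y1", "right": "y2"})])
# lowered["left"] ends with "y3 := y1"; the Φ is gone from "join"
```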

Computing minimal SSA using dominance frontiers

In a control-flow graph, a node A is said to strictly dominate a different node B if it is impossible to reach B without passing through A first. In other words, if node B is reached, then it can be assumed that A has run. A is said to dominate B (or B to be dominated by A) if either A strictly dominates B or A = B.

A node which transfers control to a node A is called an immediate predecessor of A.

The dominance frontier of node A is the set of nodes B where A does not strictly dominate B, but does dominate some immediate predecessor of B. These are the points at which multiple control paths merge back together into a single path.

For example, in the following code:

[1] x = random()
if x < 0.5
    [2] result = "heads"
else
    [3] result = "tails"
end
[4] print(result)

Node 1 strictly dominates nodes 2, 3, and 4, and the immediate predecessors of node 4 are nodes 2 and 3.

Dominance frontiers define the points at which Φ functions are needed. In the above example, when control is passed to node 4, the definition of result used depends on whether control was passed from node 2 or 3. Φ functions are not needed for variables defined in a dominator, as there is only one possible definition that can apply.
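
The dominance relation in this example can be checked mechanically from the definition. The following naive (quadratic) sketch is illustrative only, not the efficient machinery described below:

```python
def dominators(succ, entry):
    """succ: node -> list of successors. Returns node -> set of its
    dominators, straight from the definition: A dominates B if every
    path from the entry to B passes through A, i.e. deleting A makes
    B unreachable."""
    nodes = set(succ) | {s for ss in succ.values() for s in ss}

    def reachable(avoid):
        seen, stack = set(), [entry]
        while stack:
            n = stack.pop()
            if n in seen or n == avoid:
                continue
            seen.add(n)
            stack.extend(succ.get(n, []))
        return seen

    dom = {n: {n} for n in nodes}      # every node dominates itself
    for a in nodes:
        for b in nodes - reachable(avoid=a) - {a}:
            dom[b].add(a)              # removing a cut b off from entry
    return dom

# The heads/tails example: 1 -> {2, 3} -> 4
dom = dominators({1: [2, 3], 2: [4], 3: [4], 4: []}, entry=1)
# dom[4] == {1, 4}: neither 2 nor 3 dominates the print at node 4
```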

There is an efficient algorithm for finding the dominance frontier of each node. This algorithm was originally described in "Efficiently Computing Static Single Assignment Form and the Control Dependence Graph" by Ron Cytron, Jeanne Ferrante, et al. in 1991.[10]

Keith D. Cooper, Timothy J. Harvey, and Ken Kennedy of Rice University describe an algorithm in their paper titled A Simple, Fast Dominance Algorithm:[11]

for each node b
    dominance_frontier(b) := {}
for each node b
    if the number of immediate predecessors of b ≥ 2
        for each p in immediate predecessors of b
            runner := p
            while runner ≠ idom(b)
                dominance_frontier(runner) := dominance_frontier(runner) ∪ { b }
                runner := idom(runner)

In the code above, idom(b) is the immediate dominator of b, the unique node that strictly dominates b but does not strictly dominate any other node that strictly dominates b.
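
The pseudocode above translates directly into, for example, Python; the dictionary encoding of the CFG is an assumption for illustration:

```python
def dominance_frontiers(preds, idom):
    """preds: node -> list of immediate predecessors.
    idom: node -> its immediate dominator (the entry maps to itself)."""
    df = {b: set() for b in preds}
    for b, bpreds in preds.items():
        if len(bpreds) >= 2:               # only join points matter
            for p in bpreds:
                runner = p
                # Walk up the dominator tree from each predecessor to
                # idom(b), adding b to the frontier of every node passed.
                while runner != idom[b]:
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# The heads/tails example: node 4 joins the paths through 2 and 3.
df = dominance_frontiers({1: [], 2: [1], 3: [1], 4: [2, 3]},
                         {1: 1, 2: 1, 3: 1, 4: 1})
# df[2] == {4} and df[3] == {4}: a Φ for `result` belongs at node 4
```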

Variations that reduce the number of Φ functions

"Minimal" SSA inserts the minimal number of Φ functions required to ensure that each name is assigned a value exactly once and that each reference (use) of a name in the original program can still refer to a unique name. (The latter requirement is needed to ensure that the compiler can write down a name for each operand in each operation.)

However, some of these Φ functions could be dead. For this reason, minimal SSA does not necessarily produce the fewest Φ functions that are needed by a specific procedure. For some types of analysis, these Φ functions are superfluous and can cause the analysis to run less efficiently.

Pruned SSA

Pruned SSA form is based on a simple observation: Φ functions are only needed for variables that are "live" after the Φ function. (Here, "live" means that the value is used along some path that begins at the Φ function in question.) If a variable is not live, the result of the Φ function cannot be used and the assignment by the Φ function is dead.

Construction of pruned SSA form uses live-variable information in the Φ function insertion phase to decide whether a given Φ function is needed. If the original variable name isn't live at the Φ function insertion point, the Φ function isn't inserted.

Another possibility is to treat pruning as a dead-code elimination problem. Then, a Φ function is live only if any use in the input program will be rewritten to it, or if it will be used as an argument in another Φ function. When entering SSA form, each use is rewritten to the nearest definition that dominates it. A Φ function will then be considered live as long as it is the nearest definition that dominates at least one use, or at least one argument of a live Φ.

Semi-pruned SSA

Semi-pruned SSA form[12] is an attempt to reduce the number of Φ functions without incurring the relatively high cost of computing live-variable information. It is based on the following observation: if a variable is never live upon entry into a basic block, it never needs a Φ function. During SSA construction, Φ functions for any "block-local" variables are omitted.

Computing the set of block-local variables is a simpler and faster procedure than full live-variable analysis, making semi-pruned SSA form more efficient to compute than pruned SSA form. On the other hand, semi-pruned SSA form will contain more Φ functions.
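
The block-local test can be sketched as follows (the data layout is assumed for illustration): a name escapes its block, and so may need Φ functions, exactly when some block reads it before any assignment to it in that same block:

```python
def non_local_names(blocks):
    """blocks: block name -> ordered list of (target, used_names)
    statements. Returns the names read before being assigned within
    some block; only these can be live on entry to a block, so only
    these receive Φ functions under semi-pruned SSA."""
    result = set()
    for stmts in blocks.values():
        assigned = set()                  # names defined so far in block
        for target, used in stmts:
            result |= set(used) - assigned  # upward-exposed uses
            assigned.add(target)
    return result

blocks = {
    "entry": [("x", []), ("y", ["x"])],  # x is read only after its def
    "loop":  [("y", ["y", "x"])],        # reads y and x before defining y
}
# non_local_names(blocks) -> {"x", "y"}; a purely block-local
# temporary would not appear and would get no Φ functions
```

Note that this single pass over each block replaces the iterative data-flow computation that full liveness analysis would require.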

Block arguments

Block arguments are an alternative to Φ functions that is representationally identical but in practice can be more convenient during optimization. Blocks are named and take a list of block arguments, notated as function parameters. When a block is called, its arguments are bound to the specified values. MLton, Swift SIL, and LLVM MLIR use block arguments.[13]

Converting out of SSA form

SSA form is not normally used for direct execution (although it is possible to interpret SSA[14]), and it is frequently used "on top of" another IR with which it remains in direct correspondence. This can be accomplished by "constructing" SSA as a set of functions that map between parts of the existing IR (basic blocks, instructions, operands, etc.) and its SSA counterpart. When the SSA form is no longer needed, these mapping functions may be discarded, leaving only the now-optimized IR.

Performing optimizations on SSA form usually leads to entangled SSA webs, meaning there are Φ instructions whose operands do not all have the same root operand. In such cases color-out algorithms are used to come out of SSA. Naive algorithms introduce a copy along each predecessor path whose Φ source has a different root symbol than the Φ destination. There are multiple algorithms for coming out of SSA with fewer copies; most use interference graphs, or some approximation of them, to do copy coalescing.[15]

Extensions

Extensions to SSA form can be divided into two categories.

Renaming scheme extensions alter the renaming criterion. Recall that SSA form renames each variable when it is assigned a value. Alternative schemes include static single use form (which renames each variable at each statement when it is used) and static single information form (which renames each variable when it is assigned a value, and at the post-dominance frontier).

Feature-specific extensions retain the single assignment property for variables, but incorporate new semantics to model additional features. Some feature-specific extensions model high-level programming language features like arrays, objects and aliased pointers. Other feature-specific extensions model low-level architectural features like speculation and predication.

Compilers using SSA form

Open-source

  • Mono uses SSA in its JIT compiler called Mini
  • WebKit uses SSA in its JIT compilers.[16][17]
  • Swift defines its own SSA form above LLVM IR, called SIL (Swift Intermediate Language).[18][19]
  • The Erlang compiler was rewritten in OTP 22.0 to "internally use an intermediate representation based on Static Single Assignment (SSA)", with plans for further optimizations built on top of SSA in future releases.[20]
  • The LLVM Compiler Infrastructure uses SSA form for all scalar register values (everything except memory) in its primary code representation. SSA form is only eliminated once register allocation occurs, late in the compile process (often at link time).
  • The GNU Compiler Collection (GCC) makes extensive use of SSA since version 4 (released in April 2005). The frontends generate "GENERIC" code that is then converted into "GIMPLE" code by the "gimplifier". High-level optimizations are then applied on the SSA form of "GIMPLE". The resulting optimized intermediate code is then translated into RTL, on which low-level optimizations are applied. The architecture-specific backends finally turn RTL into assembly language.
  • Go (1.7: for x86-64 architecture only; 1.8: for all supported architectures).[21][22]
  • IBM's open source adaptive Java virtual machine, Jikes RVM, uses extended Array SSA, an extension of SSA that allows analysis of scalars, arrays, and object fields in a unified framework. Extended Array SSA analysis is only enabled at the maximum optimization level, which is applied to the most frequently executed portions of code.
  • The Mozilla Firefox SpiderMonkey JavaScript engine uses SSA-based IR.[23]
  • The Chromium V8 JavaScript engine implements SSA in its Crankshaft compiler infrastructure as announced in December 2010
  • PyPy uses a linear SSA representation for traces in its JIT compiler.
  • The Android Runtime[24] and the Dalvik Virtual Machine use SSA.[25]
  • The Standard ML compiler MLton uses SSA in one of its intermediate languages.
  • LuaJIT makes heavy use of SSA-based optimizations.[26]
  • The PHP and Hack compiler HHVM uses SSA in its IR.[27]
  • The OCaml compiler uses SSA in its CMM IR (which stands for C--).[28]
  • libFirm, a library for use as the middle and back ends of a compiler, uses SSA form for all scalar register values until code generation by use of an SSA-aware register allocator.[29]
  • Various Mesa drivers via NIR, an SSA representation for shading languages.[30]

Commercial

  • The Java HotSpot performance engine's JIT compilers use an SSA-based intermediate representation.[31]
  • Microsoft's Visual C++ compiler backend uses SSA in its code optimizer.[32]
  • The SPIR-V shading language standard uses SSA form.[33]
  • IBM's family of XL Fortran compilers use SSA.[34]
  • NVIDIA's CUDA compilers use SSA.[35]

Research and abandoned

  • The ETH Oberon-2 compiler was one of the first public projects to incorporate "GSA", a variant of SSA.
  • The Open64 compiler used SSA form in its global scalar optimizer, though the code is brought into SSA form before and taken out of SSA form afterwards. Open64 uses extensions to SSA form to represent memory in SSA form as well as scalar values.
  • In 2002, researchers modified IBM's JikesRVM (named Jalapeño at the time) to run both standard Java bytecode and typesafe SSA (SafeTSA) bytecode class files, and demonstrated significant performance benefits of using the SSA bytecode.
  • jackcc is an open-source compiler for the academic instruction set Jackal 3.0. It uses a simple 3-operand code with SSA for its intermediate representation. As an interesting variant, it replaces Φ functions with a so-called SAME instruction, which instructs the register allocator to place the two live ranges into the same physical register.
  • The Illinois Concert Compiler circa 1994[36] used a variant of SSA called SSU (Static Single Use), which renames each variable when it is assigned a value and in each conditional context in which that variable is used; essentially the static single information form mentioned above. The SSU form is documented in John Plevyak's PhD thesis.
  • The COINS compiler uses SSA form optimizations.
  • Reservoir Labs' R-Stream compiler supports non-SSA (quad list), SSA and SSI (Static Single Information[37]) forms.[38]
  • Although not a compiler, the Boomerang decompiler uses SSA form in its internal representation. SSA is used to simplify expression propagation, identifying parameters and returns, preservation analysis, and more.
  • DotGNU Portable.NET used SSA in its JIT compiler.

References

Notes

  1. ^ Kelsey, Richard A. (1995). "A correspondence between continuation passing style and static single assignment form" (PDF). Papers from the 1995 ACM SIGPLAN workshop on Intermediate representations. pp. 13–22. doi:10.1145/202529.202532. ISBN 0897917545. S2CID 6207179.
  2. ^ Rastello & Tichadou 2022, sec. 1.4.
  3. ^ a b c Zadeck, Kenneth (April 2009). The Development of Static Single Assignment Form (PDF). Static Single-Assignment Form Seminar. Autrans, France.
  4. ^ Cytron, Ron; Lowry, Andy; Zadeck, F. Kenneth (1986). "Code motion of control structures in high-level languages". Proceedings of the 13th ACM SIGACT-SIGPLAN symposium on Principles of programming languages - POPL '86. pp. 70–85. doi:10.1145/512644.512651. S2CID 9099471.
  5. ^ Cytron, Ronald Kaplan; Ferrante, Jeanne. What's in a name? Or, the value of renaming for parallelism detection and storage allocation. International Conference on Parallel Processing, ICPP'87 1987. pp. 19–27.
  6. ^ Barry Rosen; Mark N. Wegman; F. Kenneth Zadeck (1988). "Global value numbers and redundant computations" (PDF). Proceedings of the 15th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '88. pp. 12–27. doi:10.1145/73560.73562. ISBN 0-89791-252-7.
  7. ^ Alpern, B.; Wegman, M. N.; Zadeck, F. K. (1988). "Detecting equality of variables in programs". Proceedings of the 15th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '88. pp. 1–11. doi:10.1145/73560.73561. ISBN 0897912527. S2CID 18384941.
  8. ^ Cytron, Ron; Ferrante, Jeanne; Rosen, Barry K.; Wegman, Mark N. & Zadeck, F. Kenneth (1991). "Efficiently computing static single assignment form and the control dependence graph" (PDF). ACM Transactions on Programming Languages and Systems. 13 (4): 451–490. CiteSeerX 10.1.1.100.6361. doi:10.1145/115372.115320. S2CID 13243943.
  9. ^ value range propagation
  10. ^ Cytron, Ron; Ferrante, Jeanne; Rosen, Barry K.; Wegman, Mark N.; Zadeck, F. Kenneth (1 October 1991). "Efficiently computing static single assignment form and the control dependence graph". ACM Transactions on Programming Languages and Systems. 13 (4): 451–490. doi:10.1145/115372.115320. S2CID 13243943.
  11. ^ Cooper, Keith D.; Harvey, Timothy J.; Kennedy, Ken (2001). A Simple, Fast Dominance Algorithm (PDF) (Technical report). Rice University, CS Technical Report 06-33870. Archived from the original (PDF) on 2025-08-05.
  12. ^ Briggs, Preston; Cooper, Keith D.; Harvey, Timothy J.; Simpson, L. Taylor (1998). Practical Improvements to the Construction and Destruction of Static Single Assignment Form (PDF) (Technical report). Archived from the original (PDF) on 2025-08-05.
  13. ^ "Block Arguments vs PHI nodes - MLIR Rationale". mlir.llvm.org. Retrieved 4 March 2022.
  14. ^ von Ronne, Jeffery; Ning Wang; Michael Franz (2004). "Interpreting programs in static single assignment form". Proceedings of the 2004 workshop on Interpreters, virtual machines and emulators - IVME '04. p. 23. doi:10.1145/1059579.1059585. ISBN 1581139098. S2CID 451410.
  15. ^ Boissinot, Benoit; Darte, Alain; Rastello, Fabrice; Dinechin, Benoît Dupont de; Guillon, Christophe (2008). "Revisiting Out-of-SSA Translation for Correctness, Code Quality, and Efficiency". HAL-Inria Cs.DS: 14.
  16. ^ "Introducing the WebKit FTL JIT". 13 May 2014.
  17. ^ "Introducing the B3 JIT Compiler". 15 February 2016.
  18. ^ "Swift Intermediate Language (GitHub)". GitHub. 30 October 2021.
  19. ^ "Swift's High-Level IR: A Case Study of Complementing LLVM IR with Language-Specific Optimization, LLVM Developers Meetup 10/2015". YouTube. 9 November 2015. Archived from the original on 2025-08-05.
  20. ^ "OTP 22.0 Release Notes".
  21. ^ "Go 1.7 Release Notes - The Go Programming Language". golang.org. Retrieved 2025-08-05.
  22. ^ "Go 1.8 Release Notes - The Go Programming Language". golang.org. Retrieved 2025-08-05.
  23. ^ "IonMonkey Overview".
  24. ^ The Evolution of ART - Google I/O 2016. Google. 25 May 2016. Event occurs at 3m47s.
  25. ^ Ramanan, Neeraja (12 Dec 2011). "JIT through the ages" (PDF).
  26. ^ "Bytecode Optimizations". the LuaJIT project.
  27. ^ "HipHop Intermediate Representation (HHIR)". GitHub. 30 October 2021.
  28. ^ Chambart, Pierre; Laviron, Vincent; Pinto, Dario (2025-08-05). "Behind the Scenes of the OCaml Optimising Compiler". OCaml Pro.
  29. ^ "Firm - Optimization and Machine Code Generation".
  30. ^ Ekstrand, Jason (16 December 2014). "Reintroducing NIR, a new IR for mesa".
  31. ^ "The Java HotSpot Performance Engine Architecture". Oracle Corporation.
  32. ^ "Introducing a new, advanced Visual C++ code optimizer". 4 May 2016.
  33. ^ "SPIR-V spec" (PDF).
  34. ^ Sarkar, V. (May 1997). "Automatic selection of high-order transformations in the IBM XL FORTRAN compilers" (PDF). IBM Journal of Research and Development. 41 (3). IBM: 233–264. doi:10.1147/rd.413.0233.
  35. ^ Chakrabarti, Gautam; Grover, Vinod; Aarts, Bastiaan; Kong, Xiangyun; Kudlur, Manjunath; Lin, Yuan; Marathe, Jaydeep; Murphy, Mike; Wang, Jian-Zhong (2012). "CUDA: Compiling and optimizing for a GPU platform". Procedia Computer Science. 9: 1910–1919. doi:10.1016/j.procs.2012.04.209.
  36. ^ "Illinois Concert Project". Archived from the original on 2025-08-05.
  37. ^ Ananian, C. Scott; Rinard, Martin (1999). Static Single Information Form (PDF) (Technical report). CiteSeerX 10.1.1.1.9976.
  38. ^ Encyclopedia of Parallel Computing.

General references

  • Rastello, Fabrice; Bouchez Tichadou, Florent, eds. (2022). SSA-based Compiler Design. Springer.