From Wikipedia, the free encyclopedia

In computer science, reference counting is a programming technique of storing the number of references, pointers, or handles to a resource, such as an object, a block of memory, disk space, and others.

In garbage collection algorithms, reference counts may be used to deallocate objects that are no longer needed.
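As a concrete illustration, Rust's Rc smart pointer (used again later in this article) exposes its reference count directly; a minimal sketch:

```rust
use std::rc::Rc;

fn main() {
    // Creating the first handle sets the count to 1.
    let a = Rc::new(vec![1, 2, 3]);
    assert_eq!(Rc::strong_count(&a), 1);

    // Copying a handle increments the shared count; no data is copied.
    let b = Rc::clone(&a);
    assert_eq!(Rc::strong_count(&a), 2);

    // Destroying a handle decrements the count; when it reaches zero,
    // the vector itself is deallocated.
    drop(b);
    assert_eq!(Rc::strong_count(&a), 1);
}
```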

Advantages and disadvantages

The main advantage of the reference counting over tracing garbage collection is that objects are reclaimed as soon as they can no longer be referenced, and in an incremental fashion, without long pauses for collection cycles and with clearly defined lifetime of every object. In real-time applications or systems with limited memory, this is important to maintain responsiveness. Reference counting is also among the simplest forms of memory management to implement. It also allows for effective management of non-memory resources such as operating system objects, which are often much scarcer than memory (tracing garbage collection systems use finalizers for this,[citation needed] but the delayed reclamation may cause problems). Weighted reference counts are a good solution for garbage collecting a distributed system.

Circular list example from a 1985 Master's Thesis.[1] Rectangles denote cons pairs, with reference counts. Even if the incoming top left pointer is removed, all counts remain >0.

Tracing garbage collection cycles are triggered too often if the set of live objects fills most of the available memory;[citation needed] it requires extra space to be efficient.[citation needed] Reference counting performance does not deteriorate as the total amount of free space decreases.[2]

Reference counts are also useful information to use as input to other runtime optimizations. For example, systems that depend heavily on immutable objects such as many functional programming languages can suffer an efficiency penalty due to frequent copies.[citation needed] However, if the compiler (or runtime system) knows that a particular object has only one reference (as most do in many systems), and that the reference is lost at the same time that a similar new object is created (as in the string append statement str ← str + "a"), it can replace the operation with a mutation on the original object.
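Rust's Rc::make_mut performs exactly this uniqueness test, though at run time rather than in the compiler (a sketch: when the count is 1 the original is mutated in place, otherwise a copy is made first):

```rust
use std::rc::Rc;

fn main() {
    let mut s = Rc::new(String::from("str"));

    // Only one reference exists, so the append mutates the original
    // allocation in place rather than copying it.
    Rc::make_mut(&mut s).push('a');
    assert_eq!(*s, "stra");

    // With a second reference alive, make_mut must clone first,
    // so the sharer's view is unaffected.
    let shared = Rc::clone(&s);
    Rc::make_mut(&mut s).push('b');
    assert_eq!(*s, "strab");
    assert_eq!(*shared, "stra");
}
```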

Reference counting in naive form has three main disadvantages over tracing garbage collection, each of which requires additional mechanisms to ameliorate:

  • The frequent updates it involves are a source of inefficiency. While tracing garbage collectors can impact efficiency severely via context switching and cache line faults, they collect relatively infrequently, while accessing objects is done continually. Also, less importantly, reference counting requires every memory-managed object to reserve space for a reference count. In tracing garbage collectors, this information is stored implicitly in the references that refer to that object, saving space, although tracing garbage collectors, particularly incremental ones, can require additional space for other purposes.
  • The naive algorithm described above can't handle reference cycles, an object which refers directly or indirectly to itself. A mechanism relying purely on reference counts will never consider cyclic chains of objects for deletion, since their reference count is guaranteed to stay nonzero (cf. picture). Methods for dealing with this issue exist but can also increase the overhead and complexity of reference counting — on the other hand, these methods need only be applied to data that might form cycles, often a small subset of all data. One such method is the use of weak references, while another involves using a mark-sweep algorithm that gets called infrequently to clean up.
  • In a concurrent setting, all updates of the reference counts and all pointer modifications must be atomic operations, which incurs an additional cost. There are three reasons for the atomicity requirements. First, a reference count field may be updated by multiple threads, so an adequate atomic instruction, such as a (costly) compare-and-swap, must be used to update the counts. Second, it must be clear which object loses a reference so that its reference count can be adequately decremented; determining this object is non-trivial in a setting where multiple threads attempt to modify the same reference (i.e., when data races are possible). Finally, there exists a subtle race in which one thread gains a pointer to an object, but before it increments the object's reference count, all other references to this object are deleted concurrently by other threads and the object is reclaimed, causing that thread to increment the reference count of a reclaimed object.

In addition to these, if the memory is allocated from a free list, reference counting suffers from poor locality. Reference counting alone cannot move objects to improve cache performance, so high performance collectors implement a tracing garbage collector as well. Most implementations (such as the ones in PHP and Objective-C) suffer from poor cache performance since they do not implement copying objects.[3]

Graph interpretation

When dealing with garbage collection schemes, it is often helpful to think of the reference graph, which is a directed graph where the vertices are objects and there is an edge from an object A to an object B if A holds a reference to B. We also have a special vertex or vertices representing the local variables and references held by the runtime system, and no edges ever go to these nodes, although edges can go from them to other nodes.

In this context, the simple reference count of an object is the in-degree of its vertex. Deleting a vertex is like collecting an object. It can only be done when the vertex has no incoming edges, so it does not affect the out-degree of any other vertices, but it can affect the in-degree of other vertices, causing their corresponding objects to be collected as well if their in-degree also becomes 0 as a result.

The connected component containing the special vertex contains the objects that can't be collected, while other connected components of the graph only contain garbage. If a reference-counting garbage collection algorithm is implemented, then each of these garbage components must contain at least one cycle; otherwise, they would have been collected as soon as their reference count (i.e., the number of incoming edges) dropped to zero.
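This interpretation can be simulated directly (an illustrative sketch; `collect` and the vertex ids are hypothetical): collection repeatedly removes any non-root vertex whose in-degree is zero, decrementing the in-degree of its successors, and cyclic garbage survives because no vertex on the cycle ever reaches in-degree zero.

```rust
use std::collections::HashMap;

/// Repeatedly remove vertices with in-degree zero (other than the
/// special root vertex 0) and return the surviving vertices, sorted.
fn collect(edges: &[(u32, u32)], vertices: &[u32]) -> Vec<u32> {
    let mut indegree: HashMap<u32, u32> = vertices.iter().map(|&v| (v, 0)).collect();
    let mut live: Vec<u32> = vertices.to_vec();
    let mut edges: Vec<(u32, u32)> = edges.to_vec();
    for &(_, to) in &edges {
        *indegree.get_mut(&to).unwrap() += 1;
    }
    loop {
        // Find a dead vertex: not the root, and no incoming edges.
        let dead = live.iter().position(|&v| v != 0 && indegree[&v] == 0);
        let Some(i) = dead else { break };
        let v = live.swap_remove(i);
        // Removing v decrements the in-degree of its successors,
        // which may make further vertices collectable (a cascade).
        edges.retain(|&(from, to)| {
            if from == v {
                *indegree.get_mut(&to).unwrap() -= 1;
                false
            } else {
                true
            }
        });
    }
    live.sort();
    live
}

fn main() {
    // Root 0 -> 1 -> 2; a garbage chain 3 -> 4; a garbage cycle 5 <-> 6.
    let edges = [(0, 1), (1, 2), (3, 4), (5, 6), (6, 5)];
    let survivors = collect(&edges, &[0, 1, 2, 3, 4, 5, 6]);
    // The chain is reclaimed by the cascade, but the cycle keeps its
    // own in-degrees nonzero and is never collected.
    assert_eq!(survivors, vec![0, 1, 2, 5, 6]);
}
```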

Dealing with inefficiency of updates

Incrementing and decrementing reference counts every time a reference is created or destroyed can significantly impede performance. Not only do the operations take time, but they damage cache performance and can lead to pipeline bubbles. Even read-only operations like calculating the length of a list require a large number of reads and writes for reference updates with naive reference counting.

One simple technique is for the compiler to combine a number of nearby reference updates into one. This is especially effective for references which are created and quickly destroyed. Care must be taken, however, to put the combined update at the right position so that a premature free can be avoided.

The Deutsch-Bobrow method of reference counting capitalizes on the fact that most reference count updates are in fact generated by references stored in local variables. It ignores these references, only counting references in data structures, but before an object with reference count zero can be deleted, the system must verify with a scan of the stack and registers that no other reference to it still exists.

Another technique devised by Henry Baker involves deferred increments,[4] in which references which are stored in local variables do not immediately increment the corresponding reference count, but instead defer this until it is necessary. If such a reference is destroyed quickly, then there is no need to update the counter. This eliminates a large number of updates associated with short-lived references (such as the above list-length-counting example). However, if such a reference is copied into a data structure, then the deferred increment must be performed at that time. It is also critical to perform the deferred increment before the object's count drops to zero, to avoid a premature free.

A dramatic decrease in the overhead on counter updates was obtained by Levanoni and Petrank.[5][6] They introduce the update coalescing method, which coalesces many of the redundant reference count updates. Consider a pointer that is updated several times in a given interval of the execution. It first points to an object O1, then to an object O2, and so forth, until at the end of the interval it points to some object On. A reference counting algorithm would typically execute rc(O1)--, rc(O2)++, rc(O2)--, rc(O3)++, rc(O3)--, ..., rc(On)++. But most of these updates are redundant: to have the reference count properly evaluated at the end of the interval, it is enough to perform rc(O1)-- and rc(On)++, since the intermediate increment/decrement pairs cancel out.
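A toy simulation of this bookkeeping (illustrative only; `apply` is a hypothetical name, and the published algorithm does far more than this): applying only the interval's first and last values yields the same final counts as the naive update sequence, with a fixed two updates per pointer per interval.

```rust
use std::collections::HashMap;

/// Apply a pointer's successive targets over one interval to the counts,
/// either naively (one dec/inc pair per retargeting) or coalesced
/// (only the interval's first and last targets matter).
/// Returns the number of counter updates performed.
fn apply(counts: &mut HashMap<&str, i32>, targets: &[&'static str], coalesce: bool) -> usize {
    if coalesce {
        *counts.get_mut(targets.first().unwrap()).unwrap() -= 1;
        *counts.get_mut(targets.last().unwrap()).unwrap() += 1;
        2
    } else {
        let mut updates = 0;
        for pair in targets.windows(2) {
            *counts.get_mut(pair[0]).unwrap() -= 1; // old target loses a ref
            *counts.get_mut(pair[1]).unwrap() += 1; // new target gains one
            updates += 2;
        }
        updates
    }
}

fn main() {
    let targets = ["O1", "O2", "O3", "O4"];
    // Pretend each object starts with one other reference to it.
    let mut naive: HashMap<&str, i32> = targets.iter().map(|&t| (t, 1)).collect();
    let mut coalesced = naive.clone();

    let n = apply(&mut naive, &targets, false);
    let c = apply(&mut coalesced, &targets, true);

    // Same final counts, far fewer counter updates.
    assert_eq!(naive, coalesced);
    assert_eq!((n, c), (6, 2));
}
```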

Levanoni and Petrank showed in 2001 how to use such update coalescing in a reference counting collector. When using update coalescing with an appropriate treatment of new objects, more than 99% of the counter updates are eliminated for typical Java benchmarks.

Update coalescing also eliminates the need to employ atomic operations during pointer updates in a concurrent setting, thereby solving the third problem of naive reference counting (the costly overhead in a concurrent setting). Levanoni and Petrank also presented an enhanced algorithm that may run concurrently with multithreaded applications employing only fine synchronization.[7]

Blackburn and McKinley's ulterior reference counting method in 2003[8] combines deferred reference counting with a copying nursery, observing that the majority of pointer mutations occur in young objects. This algorithm achieves throughput comparable with the fastest generational copying collectors with the low bounded pause times of reference counting.

Dealing with reference cycles

Perhaps the most obvious way to handle reference cycles is to design the system to avoid creating them. A system may explicitly forbid reference cycles; file systems with hard links often do this. Judicious use of "weak" (non-counted) references may also help avoid retain cycles; the Cocoa framework, for instance, recommends using "strong" references for parent-to-child relationships and "weak" references for child-to-parent relationships.[9]
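The same strong-down/weak-up convention appears in Rust's standard library (a sketch using Rc and Weak, as an analogue of the Cocoa guideline above; the Parent/Child names are illustrative):

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Parent {
    children: RefCell<Vec<Rc<Child>>>, // strong: the parent owns its children
}

struct Child {
    parent: RefCell<Weak<Parent>>, // weak: does not keep the parent alive
}

fn main() {
    let parent = Rc::new(Parent { children: RefCell::new(vec![]) });
    let child = Rc::new(Child { parent: RefCell::new(Weak::new()) });

    // Link both directions; the back-edge is weak, so no cycle of
    // counted references is formed.
    parent.children.borrow_mut().push(Rc::clone(&child));
    *child.parent.borrow_mut() = Rc::downgrade(&parent);

    // A weak reference must be upgraded to a strong one before use.
    assert!(child.parent.borrow().upgrade().is_some());

    drop(parent);
    // Once the last strong reference to the parent is gone, the weak
    // back-pointer observes its death instead of leaking a cycle.
    assert!(child.parent.borrow().upgrade().is_none());
}
```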

Systems may also be designed to tolerate or correct the cycles they create in some way. Developers may design code to explicitly "tear down" the references in a data structure when it is no longer needed, though this has the cost of requiring them to manually track that data structure's lifetime. This technique can be automated by creating an "owner" object that does the tearing-down when it is destroyed; for instance, a Graph object's destructor could delete the edges of its GraphNodes, breaking the reference cycles in the graph. Cycles may even be ignored in systems with short lives and a small amount of cyclic garbage, particularly when the system was developed using a methodology of avoiding cyclic data structures wherever possible, typically at the expense of efficiency.

Computer scientists have also discovered ways to detect and collect reference cycles automatically, without requiring changes in the data structure design. One simple solution is to periodically use a tracing garbage collector to reclaim cycles; since cycles typically constitute a relatively small amount of reclaimed space, the collector can be run much less often than with an ordinary tracing garbage collector.

Bacon describes a cycle-collection algorithm for reference counting with similarities to tracing collectors, including the same theoretical time bounds. It is based on the observation that a cycle can only be isolated when a reference count is decremented to a nonzero value. All objects which this occurs on are put on a roots list, and then periodically the program searches through the objects reachable from the roots for cycles. It knows it has found a cycle that can be collected when decrementing all the reference counts on a cycle of references brings them all down to zero.[10] An enhanced version of this algorithm by Paz et al.[11] is able to run concurrently with other operations and improve its efficiency by using the update coalescing method of Levanoni and Petrank.[5][6]

Variant forms

Although it is possible to augment simple reference counts in a variety of ways, often a better solution can be found by performing reference counting in a fundamentally different way. Here we describe some of the variants on reference counting and their benefits and drawbacks.

Weighted reference counting

In weighted reference counting, each reference is assigned a weight, and each object tracks not the number of references referring to it, but the total weight of the references referring to it. The initial reference to a newly created object has a large weight, such as 2^16. Whenever this reference is copied, half of the weight goes to the new reference, and half of the weight stays with the old reference. Since the total weight does not change, the object's reference count does not need to be updated.

Destroying a reference decrements the total weight by the weight of that reference. When the total weight becomes zero, all references have been destroyed. If an attempt is made to copy a reference with a weight of 1, the reference has to "get more weight" by adding to the total weight and then adding this new weight to the reference, and then splitting it. An alternative in this situation is to create an indirection reference object, the initial reference to which is created with a large weight which can then be split.
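A minimal single-process sketch of this scheme (illustrative only; the names `Object`, `Ref`, `create`, `copy`, and `destroy` are hypothetical, and real deployments are distributed):

```rust
/// The shared object tracks only the total weight of live references.
struct Object {
    total_weight: u32,
}

/// Each reference carries its own weight.
struct Ref {
    weight: u32,
}

const INITIAL_WEIGHT: u32 = 1 << 16; // 2^16, as in the text

fn create(obj: &mut Object) -> Ref {
    obj.total_weight = INITIAL_WEIGHT;
    Ref { weight: INITIAL_WEIGHT }
}

/// Copying splits the weight in half; the object is NOT touched.
fn copy(r: &mut Ref) -> Ref {
    assert!(r.weight > 1, "weight 1: would need more weight or an indirection object");
    let half = r.weight / 2;
    r.weight -= half;
    Ref { weight: half }
}

/// Destroying a reference subtracts its weight from the total;
/// a zero total weight means no references remain.
fn destroy(obj: &mut Object, r: Ref) -> bool {
    obj.total_weight -= r.weight;
    obj.total_weight == 0 // true: the object can be reclaimed
}

fn main() {
    let mut obj = Object { total_weight: 0 };
    let mut a = create(&mut obj);
    let b = copy(&mut a); // note: no access to `obj` needed here
    assert_eq!(a.weight + b.weight, obj.total_weight);
    assert!(!destroy(&mut obj, b));
    assert!(destroy(&mut obj, a)); // last reference gone: reclaim
}
```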

The property of not needing to access a reference count when a reference is copied is particularly helpful when the object's reference count is expensive to access, for example because it is in another process, on disk, or even across a network. It can also help increase concurrency by avoiding many threads locking a reference count to increase it. Thus, weighted reference counting is most useful in parallel, multiprocess, database, or distributed applications.

The primary problem with simple weighted reference counting is that destroying a reference still requires accessing the reference count, and if many references are destroyed, this can cause the same bottlenecks we seek to avoid. Some adaptations of weighted reference counting seek to avoid this by transferring weight from a dying reference to an active reference.

Weighted reference counting was independently devised by Bevan[12] and Watson & Watson[13] in 1987.

Indirect reference counting

In indirect reference counting, it is necessary to keep track of the reference's source. This means that two references are kept to the object: a direct one which is used for invocations; and an indirect one which forms part of a diffusion tree, such as in the Dijkstra–Scholten algorithm, which allows a garbage collector to identify dead objects. This approach prevents an object from being discarded prematurely.

Examples of use

Garbage collection

As a collection algorithm, reference counting tracks, for each object, a count of the number of references to it held by other objects. If an object's reference count reaches zero, the object has become inaccessible, and can be destroyed.

When an object is destroyed, any objects referenced by that object also have their reference counts decreased. Because of this, removing a single reference can potentially lead to a large number of objects being freed. A common modification allows reference counting to be made incremental: instead of destroying an object as soon as its reference count becomes zero, it is added to a list of unreferenced objects, and periodically (or as needed) one or more items from this list are destroyed.

Simple reference counts require frequent updates. Whenever a reference is destroyed or overwritten, the reference count of the object it references is decremented, and whenever one is created or copied, the reference count of the object it references is incremented.
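The counting scheme above, with the incremental zero-count list, can be sketched as follows (illustrative only; `Heap`, `dec_ref`, and `sweep` are hypothetical names, and objects are reduced to ids):

```rust
use std::collections::HashMap;

struct Heap {
    counts: HashMap<u32, u32>,        // object id -> reference count
    children: HashMap<u32, Vec<u32>>, // references held by each object
    zero_list: Vec<u32>,              // deferred: objects whose count hit zero
    freed: Vec<u32>,                  // record of reclaimed objects
}

impl Heap {
    fn dec_ref(&mut self, id: u32) {
        let c = self.counts.get_mut(&id).unwrap();
        *c -= 1;
        if *c == 0 {
            // Incremental variant: defer destruction instead of freeing
            // (and cascading) immediately.
            self.zero_list.push(id);
        }
    }

    /// Periodically free up to `budget` queued objects; each freed object
    /// drops its own references, which may enqueue further objects.
    fn sweep(&mut self, budget: usize) {
        for _ in 0..budget {
            let Some(id) = self.zero_list.pop() else { return };
            for child in self.children.remove(&id).unwrap_or_default() {
                self.dec_ref(child);
            }
            self.freed.push(id);
        }
    }
}

fn main() {
    let mut heap = Heap {
        counts: HashMap::from([(1, 1), (2, 1), (3, 1)]),
        children: HashMap::from([(1, vec![2]), (2, vec![3])]),
        zero_list: vec![],
        freed: vec![],
    };
    // Removing the single external reference to object 1 eventually
    // frees the whole chain 1 -> 2 -> 3.
    heap.dec_ref(1);
    heap.sweep(10);
    assert_eq!(heap.freed, vec![1, 2, 3]);
}
```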

Reference counting is also used in file systems and distributed systems, where full non-incremental tracing garbage collection is too time-consuming because of the size of the object graph and slow access speed.[14]

Component Object Model

Microsoft's Component Object Model (COM) and WinRT make pervasive use of reference counting. In fact, two of the three methods that all COM objects must provide (in the IUnknown interface) increment or decrement the reference count. Much of the Windows Shell and many Windows applications (including MS Internet Explorer, MS Office, and countless third-party products) are built on COM, demonstrating the viability of reference counting in large-scale systems.[citation needed]

One primary motivation for reference counting in COM is to enable interoperability across different programming languages and runtime systems. A client need only know how to invoke object methods in order to manage object life cycle; thus, the client is completely abstracted from whatever memory allocator the implementation of the COM object uses. As a typical example, a Visual Basic program using a COM object is agnostic towards whether that object was allocated (and must later be deallocated) by a C++ allocator or another Visual Basic component.

C++

C++ does not perform reference-counting by default, fulfilling its philosophy of not adding functionality that might incur overheads where the user has not explicitly requested it. Objects that are shared but not owned can be accessed via a reference, raw pointer, or iterator (a conceptual generalisation of pointers).

However, by the same token, C++ provides native ways for users to opt in to such functionality: C++11 provides reference counted smart pointers, via the std::shared_ptr class, enabling automatic shared memory-management of dynamically allocated objects. Programmers can use this in conjunction with weak pointers (via std::weak_ptr) to break cyclic dependencies. Objects that are dynamically allocated but not intended to be shared can have their lifetime automatically managed using a std::unique_ptr.

In addition, C++11's move semantics further reduce the extent to which reference counts need to be modified by removing the deep copy normally used when a function returns an object, as it allows for a simple copy of the pointer of said object.

Cocoa (Objective-C)

Apple's Cocoa and Cocoa Touch frameworks (and related frameworks, such as Core Foundation) use manual reference counting, much like COM. Traditionally this was accomplished by the programmer manually sending retain and release messages to objects, but Automatic Reference Counting, a Clang compiler feature that automatically inserts these messages as needed, was added in iOS 5[15] and Mac OS X 10.7.[16] Mac OS X 10.5 introduced a tracing garbage collector as an alternative to reference counting, but it was deprecated in OS X 10.8 and removed from the Objective-C runtime library in macOS Sierra.[17][18] iOS has never supported a tracing garbage collector.

Delphi

Delphi is mostly not a garbage collected language, in that user-defined types must still be manually allocated and deallocated; however, it does provide automatic collection using reference counting for a few built-in types, such as strings, dynamic arrays, and interfaces, for ease of use and to simplify the generic database functionality. It is up to the programmer to decide whether to use the built-in types; Delphi programmers have complete access to low-level memory management, as in C and C++, so any potential cost of Delphi's reference counting can, if desired, be easily circumvented.

Some of the reasons reference counting may have been preferred to other forms of garbage collection in Delphi include:

  • The general benefits of reference counting, such as prompt collection.
  • Cycles either cannot occur or do not occur in practice because none of the garbage-collected built-in types are recursive. (Interfaces could be used to create such a scenario, but that is not common usage.)
  • The overhead in code size required for reference counting is very small (on native x86, typically a single LOCK INC, LOCK DEC or LOCK XADD instruction, which ensures atomicity in any environment), and no separate thread of control is needed for collection as would be needed for a tracing garbage collector.
  • Many instances of the most commonly used garbage-collected type, the string, have a short lifetime, since they are typically intermediate values in string manipulation. Much local string usage could be optimized away, but the compiler does not currently do so.
  • The reference count of a string is checked before mutating a string. This allows strings with a reference count of 1 to be mutated directly, whilst strings with a higher reference count are copied before mutation. This preserves the general behaviour of old-style Pascal strings whilst eliminating the cost of copying the string on every assignment.
  • Because garbage-collection is only done on built-in types, reference counting can be efficiently integrated into the library routines used to manipulate each datatype, keeping the overhead needed for updating of reference counts low. Moreover, a lot of the runtime library is in hand-optimized assembler.
  • The string type can be cast to a pointer to char, and high performance operations can be performed that way. This is important since both Delphi and FPC implement their RTL in Pascal. Various other automated types have such casting options.

GObject

The GObject object-oriented programming framework implements reference counting on its base types, including weak references. Reference incrementing and decrementing uses atomic operations for thread safety. A significant amount of the work in writing bindings to GObject from high-level languages lies in adapting GObject reference counting to work with the language's own memory management system.

The Vala programming language uses GObject reference counting as its primary garbage collection system, along with copy-heavy string handling.[19]

Perl

Perl also uses reference counting, without any special handling of circular references, although (as in Cocoa and C++ above), Perl does support weak references, which allows programmers to avoid creating a cycle.

PHP

PHP uses a reference counting mechanism for its internal variable management.[20] Since PHP 5.3, it implements the algorithm from Bacon's paper mentioned above. PHP allows cycle collection to be turned on and off with user-level functions, and also allows the purging mechanism to be forced to run manually.

Python

Python also uses reference counting and offers cycle detection as well (and can reclaim reference cycles).[21]

Rust

Like other low-level languages, Rust does not provide reference counting by default. Instead, any constructed type will be dropped when it falls out of scope. When a programmer needs to define the scope of a constructed type, they often use lifetimes.

However, the language also offers various alternatives to complex forms of memory management. Reference counting functionality is provided by the Rc and Arc types, which are non-atomic and atomic respectively.

For example, the type Rc<T> provides shared ownership of a value of type T, allocated on the heap for multiple references to its data.[22]

use std::rc::Rc;

struct Cat {
    color: String,
}

fn main() {
    let cat = Cat { color: "black".to_string() };
    // Move the Cat into a reference-counted heap allocation; cloning
    // the resulting handle shares the data and bumps the count.
    let cat = Rc::new(cat);
    let same_cat = Rc::clone(&cat); // reference count is now 2
    println!("{}", same_cat.color);
}

Using these constructs allows programmers to avoid lifetimes for a small runtime cost. Both reference-counted types keep track of the number of owners and drop the held value when no owners remain.

One noteworthy facet of these types is related to their usage as a shared reference. In Rust, shared references cannot mutate their held data, so Rc often comes bundled with Cell, and Arc with Mutex, in contexts where interior mutability is necessary.
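For example (a minimal sketch): a counter shared between two Rc handles can be mutated through Cell even though Rc itself only hands out shared references.

```rust
use std::cell::Cell;
use std::rc::Rc;

fn main() {
    let counter = Rc::new(Cell::new(0));
    let alias = Rc::clone(&counter);

    // Both handles are shared references, but Cell provides interior
    // mutability: set/get copy values in and out without &mut access.
    alias.set(alias.get() + 1);
    counter.set(counter.get() + 1);

    assert_eq!(counter.get(), 2);
}
```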

Interior mutability without UnsafeCell has performance costs, too, so, for maximum performance, some applications may call for additional complexity.[23]

Squirrel

Squirrel uses reference counting with cycle detection. This tiny language is relatively unknown outside the video game industry; however, it is a concrete example of how reference counting can be practical and efficient (especially in realtime environments).[citation needed]

Swift

Swift uses reference counting to track and manage the memory of class instances, and provides the weak keyword for creating weak references. Instances of value types do not use reference counting.[24]

Tcl

Tcl 8 uses reference counting for memory management of values (Tcl_Obj structs). Since Tcl's values are immutable, reference cycles are impossible to form and no cycle detection scheme is needed. Operations that would replace a value with a modified copy are generally optimized to instead modify the original when its reference count indicates that it is not shared. The references are counted at a data structure level, so the problems with very frequent updates discussed above do not arise.

Xojo

Xojo also uses reference counting, without any special handling of circular references, although (as in Cocoa and C++ above), Xojo does support weak references, which allows programmers to avoid creating a cycle.

File systems

Many file systems maintain reference counts to any particular block or file, for example the inode link count on Unix-style file systems, where the references are usually known as hard links. When the count reaches zero, the file can be safely deallocated. In addition to directory entries, references can also be held by live processes; a file whose last directory entry has been removed can therefore continue to exist, outside the file system hierarchy, until the last process holding it open releases it.

References

  1. ^ Kevin G. Cassidy (December 1985). The Feasibility of Automatic Storage Reclamation with Concurrent Program Execution in a LISP Environment (PDF) (Master's thesis). Naval Postgraduate School, Monterey/CA. Here: p.25
  2. ^ Wilson, Paul R. (1992). "Uniprocessor Garbage Collection Techniques". Proceedings of the International Workshop on Memory Management. London, UK: Springer-Verlag. pp. 1–42. ISBN 3-540-55940-X. Section 2.1.
  3. ^ Rifat Shahriyar, Stephen M. Blackburn, Xi Yang and Kathryn S. McKinley (2013). "Taking Off the Gloves with Reference Counting Immix" (PDF). 24th ACM SIGPLAN conference on Object Oriented Programming Systems, Languages and Applications. OOPSLA 2013. doi:10.1145/2509136.2509527.{{cite conference}}: CS1 maint: multiple names: authors list (link)
  4. ^ Henry Baker (September 1994). "Minimizing Reference Count Updating with Deferred and Anchored Pointers for Functional Data Structures". ACM SIGPLAN Notices. 29 (9): 38–43. CiteSeerX 10.1.1.25.955. doi:10.1145/185009.185016. S2CID 14448488.
  5. ^ a b Yossi Levanoni, Erez Petrank (2001). "An on-the-fly reference-counting garbage collector for java". Proceedings of the 16th ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications. OOPSLA 2001. pp. 367–380. doi:10.1145/504282.504309.
  6. ^ a b Yossi Levanoni, Erez Petrank (2006). "An on-the-fly reference-counting garbage collector for java". ACM Trans. Program. Lang. Syst. 28: 31–69. CiteSeerX 10.1.1.15.9106. doi:10.1145/1111596.1111597. S2CID 14777709.
  7. ^ "An On-the-Fly Reference-Counting Garbage Collector for Java" (PDF). Cs.technion.ac.il. Retrieved 24 June 2017.
  8. ^ Stephen Blackburn; Kathryn McKinley (2003). "Ulterior Reference Counting: Fast Garbage Collection without a Long Wait" (PDF). Proceedings of the 18th annual ACM SIGPLAN conference on Object-oriented programing, systems, languages, and applications. OOPSLA 2003. pp. 344–358. doi:10.1145/949305.949336. ISBN 1-58113-712-5.
  9. ^ "Mac Developer Library". Developer.apple.com. Retrieved 17 December 2015.
  10. ^ Bacon, David F.; Rajan, V. T. (2001). "Concurrent Cycle Collection in Reference Counted Systems" (PDF). ECOOP 2001 — Object-Oriented Programming. Lecture Notes in Computer Science. Vol. 2072. pp. 207–235. doi:10.1007/3-540-45337-7_12. ISBN 978-3-540-42206-8. Archived from the original (PDF) on 23 July 2004.
  11. ^ Harel Paz, David F. Bacon, Elliot K. Kolodner, Erez Petrank, V. T. Rajan (2007). "An efficient on-the-fly cycle collection". ACM Transactions on Programming Languages and Systems. 29 (4): 20–es. CiteSeerX 10.1.1.10.2777. doi:10.1145/1255450.1255453. S2CID 4550008.{{cite journal}}: CS1 maint: multiple names: authors list (link)
  12. ^ Bevan, D. I. (1987). "Distributed garbage collection using reference counting". Volume II: Parallel Languages on PARLE: Parallel Architectures and Languages Europe. Eindhoven, The Netherlands: Springer-Verlag. pp. 176–187. ISBN 0-387-17945-3.
  13. ^ Watson, Paul; Watson, Ian (1987). "An efficient garbage collection scheme for parallel computer architectures". Volume II: Parallel Languages on PARLE: Parallel Architectures and Languages Europe. Eindhoven, The Netherlands: Springer-Verlag. pp. 432–443. ISBN 0-387-17945-3.
  14. ^ Bruno, Rodrigo; Ferreira, Paulo (2018). "A Study on Garbage Collection Algorithms for Big Data Environments". ACM Computing Surveys. 51: 1–35. doi:10.1145/3156818. S2CID 21388487.
  15. ^ [1] Archived 9 June 2011 at the Wayback Machine
  16. ^ "Mac Developer Library". Developer.apple.com. Retrieved 17 December 2015.
  17. ^ Siracusa, John (25 July 2012). "OS X 10.8 Mountain Lion: the Ars Technica review". Ars Technica. At section "Objective-C enhancements". Retrieved 17 November 2016.
  18. ^ "Xcode 8 Release Notes". Apple Developer. 27 October 2016. Archived from the original on 19 March 2017. Retrieved 19 March 2017.
  19. ^ "Projects/Vala/ReferenceHandling - GNOME Wiki!". GNOME. 25 May 2015. Retrieved 17 December 2015.
  20. ^ "PHP: Reference Counting Basics - Manual". www.php.net. Retrieved 1 October 2020.
  21. ^ "1. Extending Python with C or C++ — Python 2.7.11 documentation". Docs.python.org. 5 December 2015. Retrieved 17 December 2015.
  22. ^ "std::rc - Rust". doc.rust-lang.org. Retrieved 2 November 2020.
  23. ^ "The Rust Reference". 21 July 2022. Interior Mutability. Archived from the original on 24 March 2024. Retrieved 22 April 2024.
  24. ^ "Documentation". docs.swift.org. Retrieved 6 December 2023.