Nearest neighbor search

Nearest neighbor search (NNS), as a form of proximity search, is the optimization problem of finding the point in a given set that is closest (or most similar) to a given point. Closeness is typically expressed in terms of a dissimilarity function: the less similar the objects, the larger the function values.

Formally, the nearest-neighbor (NN) search problem is defined as follows: given a set S of points in a space M and a query point q ∈ M, find the closest point in S to q. Donald Knuth in vol. 3 of The Art of Computer Programming (1973) called it the post-office problem, referring to an application of assigning to a residence the nearest post office. A direct generalization of this problem is a k-NN search, where we need to find the k closest points.
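
In symbols, writing d for the dissimilarity function, the problem asks for the minimizer (a standard formulation consistent with the definitions above):

```latex
\mathrm{NN}(q) \;=\; \operatorname*{arg\,min}_{p \in S} \, d(q, p)
```

k-NN search correspondingly returns the k points of S with the smallest values of d(q, ·).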

Most commonly M is a metric space and dissimilarity is expressed as a distance metric, which is symmetric and satisfies the triangle inequality. Even more commonly, M is taken to be the d-dimensional vector space where dissimilarity is measured using the Euclidean distance, Manhattan distance, or another distance metric. However, the dissimilarity function can be arbitrary. One example is asymmetric Bregman divergence, for which the triangle inequality does not hold.[1]

Applications


The nearest neighbor search problem arises in numerous fields of application, including pattern recognition, statistical classification, computer vision, recommendation systems, data compression, and cluster analysis.

Methods


Various solutions to the NNS problem have been proposed. The quality and usefulness of the algorithms are determined by the time complexity of queries as well as the space complexity of any search data structures that must be maintained. The informal observation usually referred to as the curse of dimensionality states that there is no general-purpose exact solution for NNS in high-dimensional Euclidean space using polynomial preprocessing and polylogarithmic search time.

Exact methods

Linear search

The simplest solution to the NNS problem is to compute the distance from the query point to every other point in the database, keeping track of the "best so far". This algorithm, sometimes referred to as the naive approach, has a running time of O(dN), where N is the cardinality of S and d is the dimensionality of S. There are no search data structures to maintain, so the linear search has no space complexity beyond the storage of the database. Naive search can, on average, outperform space partitioning approaches on higher dimensional spaces.[5]

Distance comparison requires only relative, not absolute, distances. In geometric coordinate systems, the distance calculation can be sped up considerably by omitting the square root from the calculation of the distance between two coordinates: comparing squared distances yields identical results.
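
A minimal sketch of the naive scan in Python, comparing squared Euclidean distances so that no square root is needed (the function and variable names here are illustrative):

```python
def linear_search_nn(points, q):
    """Return the point of `points` closest to `q` under Euclidean distance.

    Squared distances are compared; omitting the square root does not
    change which point is nearest.
    """
    best, best_d2 = None, float("inf")
    for p in points:
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))  # squared distance
        if d2 < best_d2:
            best, best_d2 = p, d2                     # new "best so far"
    return best

print(linear_search_nn([(0, 0), (3, 4), (1, 1)], (2, 2)))  # -> (1, 1)
```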

Space partitioning


Since the 1970s, the branch and bound methodology has been applied to the problem. In the case of Euclidean space, this approach encompasses spatial index or spatial access methods. Several space-partitioning methods have been developed for solving the NNS problem. Perhaps the simplest is the k-d tree, which iteratively bisects the search space into two regions containing half of the points of the parent region. Queries are performed via traversal of the tree from the root to a leaf by evaluating the query point at each split. Depending on the distance specified in the query, neighboring branches that might contain hits may also need to be evaluated. In constant dimension, the average query complexity is O(log N)[6] in the case of randomly distributed points, and the worst-case complexity is O(kN^(1-1/k)).[7] Alternatively, the R-tree data structure was designed to support nearest neighbor search in a dynamic context, as it has efficient algorithms for insertions and deletions, such as the R*-tree.[8] R-trees can yield nearest neighbors not only for Euclidean distance but can also be used with other distances.
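
As a usage sketch, assuming SciPy is available (scipy.spatial.KDTree is one widely used k-d tree implementation):

```python
import numpy as np
from scipy.spatial import KDTree

rng = np.random.default_rng(42)
points = rng.random((10_000, 3))     # N points in 3 dimensions

tree = KDTree(points)                # build the k-d tree once
dist, idx = tree.query([0.5, 0.5, 0.5], k=1)   # nearest-neighbor query
print(points[idx], dist)
```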

In the case of general metric space, the branch-and-bound approach is known as the metric tree approach. Particular examples include vp-tree and BK-tree methods.

Given a set of points from a 3-dimensional space stored in a BSP tree, and a query point from the same space, the following algorithm finds the nearest point-cloud point to the query point. (Strictly speaking, "the" nearest point may not be unique; in practice, we usually only care about finding any one of the point-cloud points at the shortest distance to the query point.)

The idea is, at each branching of the tree, to guess that the closest point in the cloud lies in the half-space containing the query point. This may not be the case, but it is a good heuristic. After recursively solving the problem for the guessed half-space, compare the distance returned by this result with the shortest distance from the query point to the partitioning plane. This latter distance is the distance between the query point and the closest point that could possibly exist in the half-space not searched. If it is greater than the distance returned by the earlier result, there is clearly no need to search the other half-space; otherwise, solve the problem for the other half-space as well and return the closer of the two results. The performance of this algorithm is nearer to logarithmic than to linear time when the query point is near the cloud: as the distance between the query point and the closest point-cloud point approaches zero, the algorithm effectively only needs to perform a look-up using the query point as a key to find the correct result.
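
A compact sketch of this recursive pruning in Python, using a k-d tree as the space partition; all names are illustrative rather than taken from any particular library:

```python
def dist2(a, b):
    """Squared Euclidean distance; sufficient for comparisons."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class Node:
    """One BSP split: a point, the splitting axis, and two half-spaces."""
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build(points, depth=0):
    """Build a k-d tree by median splits, halving the points per region."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return Node(points[mid], axis,
                build(points[:mid], depth + 1),
                build(points[mid + 1:], depth + 1))

def nearest(node, q, best=None):
    if node is None:
        return best
    if best is None or dist2(q, node.point) < dist2(q, best):
        best = node.point
    # Guess: recurse first into the half-space containing the query.
    near, far = ((node.left, node.right)
                 if q[node.axis] <= node.point[node.axis]
                 else (node.right, node.left))
    best = nearest(near, q, best)
    # Search the other half-space only if the partitioning plane is
    # closer to the query than the best point found so far.
    if (q[node.axis] - node.point[node.axis]) ** 2 < dist2(q, best):
        best = nearest(far, q, best)
    return best

tree = build([(2, 3, 0), (5, 4, 1), (9, 6, 2), (4, 7, 3), (8, 1, 4)])
print(nearest(tree, (6, 5, 2)))   # -> (5, 4, 1)
```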

Approximation methods


An approximate nearest neighbor search algorithm is allowed to return points whose distance from the query is at most c times the distance from the query to its nearest points. The appeal of this approach is that, in many cases, an approximate nearest neighbor is almost as good as the exact one. In particular, if the distance measure accurately captures the notion of user quality, then small differences in the distance should not matter.[9]
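
Written out, the guarantee for an approximation factor c ≥ 1 is that the returned point p satisfies

```latex
d(q, p) \;\le\; c \cdot \min_{p^{*} \in S} d(q, p^{*}).
```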

Greedy search in proximity neighborhood graphs


Proximity graph methods (such as navigable small world graphs[10] and HNSW[11][12]) are considered the current state of the art for approximate nearest neighbor search.

The methods are based on greedy traversal of proximity neighborhood graphs G(V, E) in which every point x_i ∈ S is uniquely associated with a vertex v_i ∈ V. The search for the nearest neighbors to a query q in the set S takes the form of searching for a vertex in the graph G(V, E). The basic algorithm – greedy search – works as follows: the search starts from an enter-point vertex by computing the distances from the query q to each vertex of its neighborhood, and then finds the vertex with the minimal distance value. If the distance between the query and the selected vertex is smaller than the distance between the query and the current element, the algorithm moves to the selected vertex, which becomes the new enter-point. The algorithm stops when it reaches a local minimum: a vertex whose neighborhood does not contain a vertex that is closer to the query than the vertex itself.
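
A minimal sketch of this greedy routine (the graph representation and names are illustrative):

```python
def greedy_search(graph, dist, q, enter_point):
    """Greedy routing on a proximity graph.

    graph:       dict mapping each vertex to an iterable of its neighbors
    dist:        dissimilarity function dist(vertex, query)
    q:           the query
    enter_point: vertex where the traversal starts
    """
    current = enter_point
    while True:
        # Closest neighbor of the current vertex to the query.
        candidate = min(graph[current], key=lambda v: dist(v, q),
                        default=current)
        if dist(candidate, q) < dist(current, q):
            current = candidate      # move one hop toward the query
        else:
            return current           # local minimum reached

# Tiny example: vertices are 1-D points, graph is the path 0 - 1 - 2 - 3.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(greedy_search(graph, lambda v, q: abs(v - q), q=2.2, enter_point=0))  # -> 2
```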

The idea of proximity neighborhood graphs was exploited in multiple publications, including the seminal paper by Arya and Mount,[13] in the VoroNet system for the plane,[14] in the RayNet system for the Euclidean space E^n,[15] and in the Navigable Small World,[10] Metrized Small World[16] and HNSW[11][12] algorithms for the general case of spaces with a distance function. These works were preceded by a pioneering paper by Toussaint, in which he introduced the concept of a relative neighborhood graph.[17]

Locality sensitive hashing


Locality sensitive hashing (LSH) is a technique for grouping points in space into 'buckets' based on some distance metric operating on the points. Points that are close to each other under the chosen metric are mapped to the same bucket with high probability.[18]
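
One classic LSH family for cosine similarity hashes each vector by its sign pattern against random hyperplanes; the following sketch is illustrative (the sizes, names, and single hash table are simplifications):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
d, n_planes = 64, 16
planes = rng.standard_normal((n_planes, d))   # one random hyperplane per bit

def bucket_key(x):
    """Sign pattern of x against the hyperplanes; vectors with high cosine
    similarity receive the same key with high probability."""
    return tuple(bool(s) for s in (planes @ x > 0))

# Index all points into buckets.
data = rng.standard_normal((1000, d))
buckets = defaultdict(list)
for i, x in enumerate(data):
    buckets[bucket_key(x)].append(i)

# At query time, only the query's bucket is searched exactly.
q = rng.standard_normal(d)
candidates = buckets[bucket_key(q)]
print(len(candidates), "candidates instead of", len(data))
```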

Nearest neighbor search in spaces with small intrinsic dimension


The cover tree has a theoretical bound that is based on the dataset's doubling constant. The bound on search time is O(c^12 log n) where c is the expansion constant of the dataset.

Projected radial search

In the special case where the data is a dense 3D map of geometric points, the projection geometry of the sensing technique can be used to dramatically simplify the search problem. This approach requires that the 3D data is organized by a projection to a two-dimensional grid and assumes that the data is spatially smooth across neighboring grid cells with the exception of object boundaries. These assumptions are valid when dealing with 3D sensor data in applications such as surveying, robotics and stereo vision but may not hold for unorganized data in general. In practice this technique has an average search time of O(1) or O(K) for the k-nearest neighbor problem when applied to real world stereo vision data.[4]

Vector approximation files


In high-dimensional spaces, tree indexing structures become useless because an increasing percentage of the nodes need to be examined anyway. To speed up linear search, a compressed version of the feature vectors stored in RAM is used to prefilter the datasets in a first run. The final candidates are determined in a second stage using the uncompressed data from the disk for distance calculation.[19]

Compression/clustering based search

The VA-file approach is a special case of a compression-based search, where each feature component is compressed uniformly and independently. The optimal compression technique in multidimensional spaces is vector quantization (VQ), implemented through clustering. The database is clustered and the most "promising" clusters are retrieved. Huge gains over VA-file, tree-based indexes and sequential scan have been observed.[20][21] Also note the parallels between clustering and LSH.
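
A rough sketch of this cluster-then-prune idea, using scikit-learn's KMeans as the quantizer (the cluster count and probe depth are illustrative choices):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
data = rng.standard_normal((10_000, 32))

# Vector quantization: partition the database into clusters.
km = KMeans(n_clusters=64, n_init=10, random_state=0).fit(data)

def search(q, n_probe=4):
    """Scan only the n_probe clusters whose centroids are nearest to q."""
    center_d = np.linalg.norm(km.cluster_centers_ - q, axis=1)
    probe = np.argsort(center_d)[:n_probe]        # most "promising" clusters
    candidates = np.flatnonzero(np.isin(km.labels_, probe))
    d = np.linalg.norm(data[candidates] - q, axis=1)
    return candidates[np.argmin(d)]               # best candidate's index

print(search(rng.standard_normal(32)))
```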

Variants


There are numerous variants of the NNS problem and the two most well-known are the k-nearest neighbor search and the ε-approximate nearest neighbor search.

k-nearest neighbors


k-nearest neighbor search identifies the top k nearest neighbors to the query. This technique is commonly used in predictive analytics to estimate or classify a point based on the consensus of its neighbors. k-nearest neighbor graphs are graphs in which every point is connected to its k nearest neighbors.
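
For example, with scikit-learn (assuming it is installed; the data and query are illustrative):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [5.0, 5.0]])
nn = NearestNeighbors(n_neighbors=3).fit(X)

# Distances and indices of the 3 nearest neighbors of the query point.
dists, idxs = nn.kneighbors([[1.2, 0.9]])
print(idxs)   # [[1 2 0]]
```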

Approximate nearest neighbor


In some applications it may be acceptable to retrieve a "good guess" of the nearest neighbor. In those cases, we can use an algorithm which doesn't guarantee to return the actual nearest neighbor in every case, in return for improved speed or memory savings. Often such an algorithm will find the nearest neighbor in a majority of cases, but this depends strongly on the dataset being queried.

Algorithms that support the approximate nearest neighbor search include locality-sensitive hashing, best bin first and balanced box-decomposition tree based search.[22]

Nearest neighbor distance ratio


Nearest neighbor distance ratio applies the threshold not to the direct distance from the query point to the challenger neighbor, but to the ratio of that distance to the distance of the previous (closer) neighbor. It is used in content-based image retrieval (CBIR) to retrieve pictures through a "query by example" using the similarity between local features. More generally, it is involved in several matching problems.
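
As a sketch, this is the ratio test familiar from local-feature matching; the 0.8 threshold is a typical choice, not a fixed constant:

```python
def is_distinctive_match(d_best, d_second, ratio=0.8):
    """Accept the nearest neighbor only if it is significantly closer
    than the runner-up; d_best <= d_second is assumed."""
    return d_best < ratio * d_second

print(is_distinctive_match(0.3, 0.9))  # True: clear winner
print(is_distinctive_match(0.8, 0.9))  # False: ambiguous match
```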

Fixed-radius near neighbors


Fixed-radius near neighbors is the problem where one wants to efficiently find all points given in Euclidean space within a given fixed distance from a specified point. The distance is assumed to be fixed, but the query point is arbitrary.
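
For example, with SciPy's k-d tree (the radius and data are illustrative):

```python
import numpy as np
from scipy.spatial import KDTree

rng = np.random.default_rng(1)
points = rng.random((1000, 2))
tree = KDTree(points)

# Indices of all points within fixed radius r of an arbitrary query point.
idx = tree.query_ball_point([0.5, 0.5], r=0.1)
print(len(idx), "points within radius 0.1")
```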

All nearest neighbors


For some applications (e.g. entropy estimation), we may have N data-points and wish to know which is the nearest neighbor for every one of those N points. This could, of course, be achieved by running a nearest-neighbor search once for every point, but an improved strategy would be an algorithm that exploits the information redundancy between these N queries to produce a more efficient search. As a simple example: when we find the distance from point X to point Y, that also tells us the distance from point Y to point X, so the same calculation can be reused in two different queries.
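
A sketch of that reuse in the brute-force setting, where each pairwise distance is computed once and credited to both endpoints:

```python
import numpy as np

def all_nearest_neighbors(points):
    """Brute-force all-NN computing each pairwise distance only once:
    d(i, j) serves both query i and query j."""
    n = len(points)
    best = np.full(n, np.inf)
    nn = np.full(n, -1)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sum((points[i] - points[j]) ** 2)
            if d < best[i]:
                best[i], nn[i] = d, j
            if d < best[j]:
                best[j], nn[j] = d, i
    return nn

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(all_nearest_neighbors(pts))   # [1 0 3 2]
```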

Given a fixed dimension, a positive semi-definite norm (thereby including every Lp norm), and n points in this space, the nearest neighbour of every point can be found in O(n log n) time and the m nearest neighbours of every point can be found in O(mn log n) time.[23][24]


References


Citations

  1. ^ Cayton, Lawrence (2008). "Fast nearest neighbor retrieval for Bregman divergences". Proceedings of the 25th International Conference on Machine Learning. pp. 112–119. doi:10.1145/1390156.1390171. ISBN 9781605582054. S2CID 12169321.
  2. ^ Qiu, Deyuan, Stefan May, and Andreas Nüchter. "GPU-accelerated nearest neighbor search for 3D registration." International conference on computer vision systems. Springer, Berlin, Heidelberg, 2009.
  3. ^ Becker, Anja; Ducas, Léo; Gama, Nicolas; Laarhoven, Thijs (2016). "New directions in nearest neighbor searching with applications to lattice sieving". Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '16). pp. 10–24. Society for Industrial and Applied Mathematics.
  4. ^ a b Bewley, A.; Upcroft, B. (2013). Advantages of Exploiting Projection Structure for Segmenting Dense 3D Point Clouds (PDF). Australian Conference on Robotics and Automation.
  5. ^ Weber, Roger; Schek, Hans-J.; Blott, Stephen (1998). "A quantitative analysis and performance study for similarity search methods in high dimensional spaces" (PDF). VLDB '98 Proceedings of the 24th International Conference on Very Large Data Bases. pp. 194–205.
  6. ^ Andrew Moore. "An introductory tutorial on KD trees" (PDF). Archived from the original (PDF) on 2025-08-07. Retrieved 2025-08-07.
  7. ^ Lee, D. T.; Wong, C. K. (1977). "Worst-case analysis for region and partial region searches in multidimensional binary search trees and balanced quad trees". Acta Informatica. 9 (1): 23–29. doi:10.1007/BF00263763. S2CID 36580055.
  8. ^ Roussopoulos, N.; Kelley, S.; Vincent, F. D. R. (1995). "Nearest neighbor queries". Proceedings of the 1995 ACM SIGMOD international conference on Management of data – SIGMOD '95. p. 71. doi:10.1145/223784.223794. ISBN 0897917316.
  9. ^ Andoni, A.; Indyk, P. (2006). "Near-Optimal Hashing Algorithms for Approximate Nearest Neighbor in High Dimensions". 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06). pp. 459–468. CiteSeerX 10.1.1.142.3471. doi:10.1109/FOCS.2006.49. ISBN 978-0-7695-2720-8.
  10. ^ a b Malkov, Yury; Ponomarenko, Alexander; Logvinov, Andrey; Krylov, Vladimir (2012), Navarro, Gonzalo; Pestov, Vladimir (eds.), "Scalable Distributed Algorithm for Approximate Nearest Neighbor Search Problem in High Dimensional General Metric Spaces", Similarity Search and Applications, vol. 7404, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 132–147, doi:10.1007/978-3-642-32153-5_10, ISBN 978-3-642-32152-8, retrieved 2025-08-07
  11. ^ a b Malkov, Yury; Yashunin, Dmitry (2016). "Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs". arXiv:1603.09320 [cs.DS].
  12. ^ a b Malkov, Yu A.; Yashunin, D. A. (2020). "Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs". IEEE Transactions on Pattern Analysis and Machine Intelligence. 42 (4): 824–836. arXiv:1603.09320. doi:10.1109/TPAMI.2018.2889473. ISSN 0162-8828. PMID 30602420.
  13. ^ Arya, Sunil; Mount, David (1993). "Approximate Nearest Neighbor Queries in Fixed Dimensions". Proceedings of the Fourth Annual {ACM/SIGACT-SIAM} Symposium on Discrete Algorithms, 25–27 January 1993, Austin, Texas.: 271–280.
  14. ^ Beaumont, Olivier; Kermarrec, Anne-Marie; Marchal, Loris; Rivière, Etienne (2006). "VoroNet: A scalable object network based on Voronoi tessellations" (PDF). 2007 IEEE International Parallel and Distributed Processing Symposium. Vol. RR-5833. pp. 23–29. doi:10.1109/IPDPS.2007.370210. ISBN 1-4244-0909-8. S2CID 8844431.
  15. ^ Beaumont, Olivier; Kermarrec, Anne-Marie; Rivière, Etienne (2007). "Peer to Peer Multidimensional Overlays: Approximating Complex Structures". Principles of Distributed Systems. Lecture Notes in Computer Science. Vol. 4878. pp. 315–328. CiteSeerX 10.1.1.626.2980. doi:10.1007/978-3-540-77096-1_23. ISBN 978-3-540-77095-4.
  16. ^ Malkov, Yury; Ponomarenko, Alexander; Krylov, Vladimir; Logvinov, Andrey (2014). "Approximate nearest neighbor algorithm based on navigable small world graphs". Information Systems. 45: 61–68. doi:10.1016/j.is.2013.10.006. S2CID 9896397.
  17. ^ Toussaint, Godfried (1980). "The relative neighbourhood graph of a finite planar set". Pattern Recognition. 12 (4): 261–268. Bibcode:1980PatRe..12..261T. doi:10.1016/0031-3203(80)90066-7.
  18. ^ Rajaraman, A.; Ullman, J. (2010). "Mining of Massive Datasets, Ch. 3".
  19. ^ Weber, Roger; Blott, Stephen. "An Approximation-Based Data Structure for Similarity Search" (PDF). S2CID 14613657. Archived from the original (PDF) on 2025-08-07. {{cite journal}}: Cite journal requires |journal= (help)
  20. ^ Ramaswamy, Sharadh; Rose, Kenneth (2007). "Adaptive cluster-distance bounding for similarity search in image databases". ICIP.
  21. ^ Ramaswamy, Sharadh; Rose, Kenneth (2010). "Adaptive cluster-distance bounding for high-dimensional indexing". TKDE.
  22. ^ Arya, S.; Mount, D. M.; Netanyahu, N. S.; Silverman, R.; Wu, A. (1998). "An optimal algorithm for approximate nearest neighbor searching" (PDF). Journal of the ACM. 45 (6): 891–923. CiteSeerX 10.1.1.15.3125. doi:10.1145/293347.293348. S2CID 8193729. Archived from the original (PDF) on 2025-08-07. Retrieved 2025-08-07.
  23. ^ Clarkson, Kenneth L. (1983), "Fast algorithms for the all nearest neighbors problem", 24th IEEE Symp. Foundations of Computer Science, (FOCS '83), pp. 226–232, doi:10.1109/SFCS.1983.16, ISBN 978-0-8186-0508-6, S2CID 16665268.
  24. ^ Vaidya, P. M. (1989). "An O(n log n) Algorithm for the All-Nearest-Neighbors Problem". Discrete and Computational Geometry. 4 (1): 101–115. doi:10.1007/BF02187718.


Further reading

  • Shasha, Dennis (2004). High Performance Discovery in Time Series. Berlin: Springer. ISBN 978-0-387-00857-8.

External links

  • Nearest Neighbors and Similarity Search – a website dedicated to educational materials, software, literature, researchers, open problems and events related to NN searching. Maintained by Yury Lifshits
  • Similarity Search Wiki – a collection of links, people, ideas, keywords, papers, slides, code and data sets on nearest neighbours