
Great Ideas in Algorithm Analysis: Midterm Review


Distance Algorithms

Basic sampling lemma: Let \(S_1,\dots,S_n \subset [n]\) be sets each of size at least \(D\). Then a uniformly random sample of \(c (n/D) \log n\) elements hits every \(S_i\) (i.e., each \(S_i\) contains at least one sampled element) with high probability, for a suitable constant \(c\).
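A one-line verification via the union bound (assuming the \(c(n/D)\log n\) elements are drawn independently and uniformly from \([n]\)): the probability that some \(S_i\) is missed is at most

\[
\sum_{i=1}^{n}\Bigl(1-\frac{D}{n}\Bigr)^{c(n/D)\log n} \;\le\; n\, e^{-c\log n},
\]

which is polynomially small once \(c\) is a sufficiently large constant.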

2-Additive Approximation of APSP

Sample a set \(D_1\) of \(\tilde O(n^{1/2})\) points such that every vertex of degree \(\geq n^{1/2}\) (call this set of high-degree vertices \(V_1\)) is adjacent to at least one of them. First compute the exact distances from each vertex of \(D_1\). Then run APSP over \(E_2 \cup \delta(V \times D_1)\), where \(E_2\) is the set of edges with \(u\notin V_1\) or \(v\notin V_1\), and \(\delta(V\times D_1)\) denotes weighted edges recording the computed distances between every vertex and every vertex of \(D_1\). Since \(|E_2| \ll n^{3/2}\), the time complexity is \(\tilde O(n^{5/2})\).

The correctness comes from the following: if some \(V_1\) vertex lies on a shortest path, then that vertex has a neighbor in \(D_1\), and detouring through this neighbor increases the length by at most \(2\). Thus we get a \(2\)-additive approximation.
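A minimal sketch of this scheme for an unweighted undirected graph given as an adjacency list. The hitting set \(D_1\) is drawn at random as in the sampling lemma, and the shortcut edges \(\delta(V\times D_1)\) are used in a streamlined way (into \(D_1\) during the search, and out of \(D_1\) in a final pass), which is still enough for the \(+2\) guarantee; all names are illustrative.

```python
import heapq
import math
import random
from collections import deque

def bfs(adj, src):
    """Unweighted single-source distances from src."""
    n = len(adj)
    dist = [math.inf] * n
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == math.inf:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def apsp_2_additive(adj, c=3):
    """Sketch of the 2-additive APSP scheme described above.

    adj: adjacency list of a simple undirected unweighted graph.
    Returns d_hat with dist(u, v) <= d_hat[u][v] <= dist(u, v) + 2,
    the upper bound holding w.h.p. over the random sample D1.
    """
    n = len(adj)
    threshold = math.isqrt(n)                      # degree threshold n^(1/2)
    V1 = {u for u in range(n) if len(adj[u]) >= threshold}

    # D1: ~ c * sqrt(n) * log n random vertices; by the sampling lemma it
    # dominates every vertex of V1 with high probability.
    k = min(n, int(c * math.sqrt(n) * max(1.0, math.log(n + 1))))
    D1 = random.sample(range(n), k)

    # Exact distances from every vertex of D1.
    distD1 = {d: bfs(adj, d) for d in D1}

    # E2: edges with at least one endpoint outside V1 (kept with weight 1).
    light = [[v for v in adj[u] if u not in V1 or v not in V1] for u in range(n)]

    d_hat = []
    for s in range(n):
        # Dijkstra over E2 plus weighted shortcut edges into D1.
        dist = [math.inf] * n
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > dist[u]:
                continue
            for v in light[u]:                     # unit-weight E2 edges
                if du + 1 < dist[v]:
                    dist[v] = du + 1
                    heapq.heappush(pq, (du + 1, v))
            for d in D1:                           # shortcuts into the hitting set
                w = distD1[d][u]
                if w != math.inf and du + w < dist[d]:
                    dist[d] = du + w
                    heapq.heappush(pq, (du + w, d))
        # Final pass: leave D1 once, using the exact distances out of D1.
        for v in range(n):
            for d in D1:
                if dist[d] != math.inf and distD1[d][v] != math.inf:
                    dist[v] = min(dist[v], dist[d] + distD1[d][v])
        d_hat.append(dist)
    return d_hat
```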

A further improvement: let \(V_1\) be the vertices of degree \(\geq n^{2/3}\) and \(V_2\) those of degree \(\geq n^{1/3}\), with dominating sets \(D_1, D_2\) of sizes \(\tilde O(n^{1/3})\) and \(\tilde O(n^{2/3})\) respectively. Let \(E_2\) be the edges with \(u\notin V_1\) or \(v\notin V_1\), and \(E_3\) the edges with \(u\notin V_2\) or \(v\notin V_2\).

Let \(\delta\) be the exact distances from \(D_1\) (computed in the full graph), and \(\delta'\) the distances from \(D_2\) computed over \(E_2\). For each \(u\), run a single-source shortest-path computation over \(E_3 \cup E^* \cup \delta \cup \delta'(u\times D_2)\), where \(E^*\) is the set of edges connecting \(V_2\) and \(D_2\). Counting edges, this has time complexity \(\tilde O(n^{7/3})\).

If a path has no \(V_2\) vertex, it is clearly covered (all its edges lie in \(E_3\)). If it has a \(V_1\) vertex, it is covered by \(\delta\) as before. In the remaining case, let \(w\) be the last \(V_2\) vertex on the path and \(w'\) its neighbor in \(D_2\): the segment from \(u\) to \(w'\) is covered by \(\delta'\), and the segment from \(w'\) to \(v\) is covered by \(E^*\) together with the remaining edges, which lie in \(E_3\).

3-Multiplicative Approximation of APSP

With suitable preprocessing, for each vertex we can compute its \(b\) nearest vertices (its ball) in \(\tilde O(b^2)\) time.

For each vertex, compute its ball with \(b = n^{2/3}\), and sample a hitting set \(D\) of size \(\tilde O(n^{1/3})\) that hits every ball. This takes \(\tilde O(n^{7/3})\) time in total. For a pair \(u, v\) such that \(v\) does not lie in the ball of \(u\), every vertex of the ball is at distance at most \(d(u,v)\) from \(u\); use the \(D\) vertex inside the ball as a midpoint. This gives a \(3\)-multiplicative approximation.
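In symbols, let \(w \in D\) lie in the ball of \(u\); since \(v\) is outside the ball, \(d(u,w) \le d(u,v)\), hence

\[
d(u,w) + d(w,v) \;\le\; d(u,v) + \bigl(d(w,u) + d(u,v)\bigr) \;\le\; 3\,d(u,v).
\]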

Fibonacci Heap and its Applications

We first consider how to maintain a heap that only supports merge (which immediately gives insert) and extract-min.

We maintain a list of heap-ordered binomial trees, together with a pointer locating the minimum root.

When we merge two lists, we simply concatenate them and update the minimum pointer. For extract-min, we remove the minimum node, append its children to the list, and then find the new minimum; while doing so we merge trees of the same rank until all ranks are distinct. Let the potential \(\Phi\) be the length of the list; then the amortized time of extract-min is \(O(\log n)\), and insert is \(O(1)\).
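A minimal sketch of this lazy structure, supporting insert and merge in \(O(1)\) and extract-min with consolidation by rank (decrease-key and the marking rule of the next paragraphs are omitted); class and method names are illustrative.

```python
class Node:
    """A heap-ordered tree node; `children` holds its child roots."""
    def __init__(self, key):
        self.key = key
        self.children = []

    @property
    def rank(self):
        return len(self.children)

class LazyHeap:
    """Lazy mergeable heap: a list of heap-ordered trees plus a min pointer."""
    def __init__(self):
        self.roots = []
        self.min_node = None

    def insert(self, key):                 # O(1): a singleton tree joins the list
        node = Node(key)
        self.roots.append(node)
        if self.min_node is None or key < self.min_node.key:
            self.min_node = node
        return node

    def merge(self, other):                # O(1): concatenate lists, update min
        self.roots.extend(other.roots)
        if other.min_node is not None and (
            self.min_node is None or other.min_node.key < self.min_node.key
        ):
            self.min_node = other.min_node
        other.roots, other.min_node = [], None

    def extract_min(self):                 # amortized O(log n)
        m = self.min_node
        if m is None:
            raise IndexError("extract_min from empty heap")
        self.roots.remove(m)
        self.roots.extend(m.children)      # children become roots
        self._consolidate()
        return m.key

    def _consolidate(self):
        """Link roots of equal rank until all ranks are distinct; re-find min."""
        by_rank = {}
        for node in self.roots:
            while node.rank in by_rank:
                other = by_rank.pop(node.rank)
                if other.key < node.key:
                    node, other = other, node
                node.children.append(other)   # link: larger key becomes a child
            by_rank[node.rank] = node
        self.roots = list(by_rank.values())
        self.min_node = min(self.roots, key=lambda x: x.key, default=None)
```

Decrease-key would additionally cut the node from its parent and append it to the root list, which is exactly where the marking rule below becomes necessary.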

Now we consider how to support decrease-key. When a node's key is decreased, we cut it from its parent and insert it into the root list. But this alone could leave a node with a large rank but a very small subtree, breaking the \(O(\log n)\) bound on ranks.

We give each node a mark bit. When we cut a child of a node, we mark that node; if it is already marked, we cut it as well and recursively check its parent (cascading cuts).

This guarantees that each non-root node loses at most one child, so for a node of rank \(r\) (we use the degree as the rank), the size of its subtree is at least \(\phi^r\), where \(\phi\) is the golden ratio. This gives the \(O(\log n)\) upper bound on the ranks.
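A quick sketch of the size bound: order the children of a rank-\(r\) node by link time; the \(i\)-th linked child had rank at least \(i-1\) when it was linked and has lost at most one child since, so its current rank is at least \(i-2\). Writing \(s_r\) for the minimum possible subtree size of a rank-\(r\) node, for \(r \ge 1\)

\[
s_r \;\ge\; 2 + \sum_{i=0}^{r-2} s_i \;\ge\; F_{r+2} \;\ge\; \phi^{r},
\]

where \(F\) denotes the Fibonacci numbers, hence \(r \le \log_\phi n = O(\log n)\).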

Minimum Spanning Tree

Recall that we now have two different approaches.

  • Prim's algorithm, using Fibonacci heap, achieves \(O(m + n \log n)\) time.
  • Borůvka's algorithm, achieves \(O(m\log n)\) time.

Combine them: let \(k = m / n\); in each phase we repeatedly run Prim's algorithm from an unvisited vertex, stopping each run once the tree reaches size \(2^k\) (or touches a previously grown tree). At the end of each phase, we contract the components in the Borůvka sense. A phase takes \(O(m + n \log(2^k)) = O(m)\) time and contracts the graph to at most \(n' = n / 2^k\) vertices, so the new ratio satisfies \(k' = m'/n' \ge 2^k\). The total time complexity is \(O(m \beta(m, n))\), where \(\beta(m, n) = \min\{ i : \log^{(i)}n \leq m/n \}\).
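A quick check of the phase count, following the simplified accounting above and writing \(k_i\) for the threshold exponent of phase \(i\):

\[
k_0 = \frac{m}{n}, \qquad k_{i+1} \ge 2^{k_i}, \qquad \text{and the process stops once } 2^{k_i} \ge n, \text{ i.e., } k_i \ge \log n .
\]

Unfolding, \(k_i \ge \log n\) as soon as \(k_0 \ge \log^{(i)} n\), which by definition happens within \(\beta(m,n)\) phases, each costing \(O(m)\).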

Union-Find and Inverse Ackermann function

The first optimization is the rank heuristic (union by rank): it keeps the depth at most \(\log n\), giving \(O(\log n)\) time per operation.

The second optimization is path compression.
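A minimal sketch of the structure with both heuristics; names are illustrative.

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path compression."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n               # rank = upper bound on tree height

    def find(self, x):
        root = x
        while self.parent[root] != root:  # walk up to the root
            root = self.parent[root]
        while self.parent[x] != root:     # path compression: repoint to the root
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False
        if self.rank[rx] < self.rank[ry]: # rank heuristic: attach shallower tree
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```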

The first analysis: during a single find, all but at most \(O(\log \log n)\) of the nodes on the find path have their gap \(r(p(x)) - r(x)\) at least doubled by the compression. Taking the potential \(\Phi = \sum_x \log(r(p(x))-r(x))\) (each term lies between \(0\) and \(\log\log n\), since ranks are at most \(\log n\)) yields an amortized bound of \(O(\log \log n)\) per operation.

A better analysis: Let \(T(m, r)\) be the total time of \(m\) operations when all ranks are \(\leq r\). Nodes of rank \(\geq s\) number at most \(n/2^s\), and each time such a node's parent pointer is changed by a compression its parent's rank strictly increases, so the total time spent on them is at most \(n/2^s \cdot r \le m r/2^s\). This gives the bound \(T(m, r) \leq T(m, s) + O(m r/2^s + m)\). Taking \(s = \log r\) and iterating, we get \(T(m, r) \leq O(m\log^* m)\).

A further analysis will give the bound \(O(m\alpha(m))\).

Semigroup Partial Sum

Well known in OI.

Splay Tree, Link/Cut Tree and ET-Tree

Well known in OI.

Dynamic Connectivity

Holm–de Lichtenberg–Thorup Algorithm

Intuition: we maintain a spanning forest of the graph; the problem is that when a tree edge is deleted, we need to find a replacement edge reconnecting the two halves. We brute-force over the smaller component and enumerate the non-tree edges incident to it. To prevent an edge from being enumerated too many times, the key idea is to push probed edges into a smaller tree (a higher level).

Let \(E^{(i)}\) be a partition of the edges into levels, \(\bigsqcup_i E^{(i)} = E\), with new edges entering at level \(0\). Let \(F^{(i)}\) be a spanning forest of \(E^{(i)} \sqcup E^{(i+1)} \sqcup \cdots\), satisfying \(F^{(i)} \subset F^{(i-1)}\). When we delete a tree edge of level \(i\), we search for a replacement among the level-\(i\) non-tree edges in the smaller component of \(F^{(i)}\), and push every probed edge that fails to reconnect the components up to level \(i+1\).

By induction, each connected component of \(F^{(i)}\) has size at most \(n / 2^i\) (when edges are pushed from level \(i\) to level \(i+1\), the smaller side has at most half of the component's vertices). Hence each edge can be pushed at most \(O(\log n)\) times, and with ET-trees every push and every forest update costs \(O(\log n)\), so the amortized time per update is \(O(\log^2 n)\).
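A rough accounting of the amortized bound (assuming each level keeps an ET-tree over its forest):

\[
\underbrace{O(m\log n)}_{\text{total level increases}}\cdot\underbrace{O(\log n)}_{\text{ET-tree work per probe}}
\;+\;(\#\text{updates})\cdot\underbrace{O(\log n)\cdot O(\log n)}_{\text{forest maintenance across levels}},
\]

which is \(O(\log^2 n)\) amortized per edge insertion or deletion.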

Frederickson's Worst-Case Algorithm

First transform the graph into one of bounded degree; then the spanning tree can be partitioned into \(\Theta(m/B)\) blocks of size \(\Theta(B)\).

Use a worst-case dynamic tree structure to maintain the non-tree edges between blocks. Taking \(B = \sqrt m\), we can find a replacement edge in \(\tilde O (\sqrt m)\) worst-case time.

Comment: this approach is also capable of maintaining an MST!

Eppstein–Galil–Italiano–Nissenzweig's Sparsification Trick

Suppose we maintain two spanning forests \(F_1, F_2\); since \(F_1 \cup F_2\) has only \(O(n)\) edges, whenever \(F_1\) or \(F_2\) changes by \(O(1)\) edges we can update the spanning forest of \(F_1\cup F_2\) in \(\tilde O(\sqrt n)\) time using the algorithm above.

So we arrange the edges at the leaves of a balanced binary tree and maintain, at each internal node, a spanning forest of the union of its children's forests; an update propagates along a single leaf-to-root path, so the whole structure is maintained in \(\tilde O(\sqrt n)\) time per update.

Some Maximum Flow Algorithms

Capacity Scaling

At the start of each \(\Delta\)-scaling phase, the residual max flow is at most \(2m\Delta\) (a cut of at most \(m\) edges, each with residual capacity below \(2\Delta\)), and each augmentation sends at least \(\Delta\), so a phase has \(O(m)\) augmentations. With \(O(\log U)\) phases and \(O(m)\) time per augmentation, the total time is \(O(m^2 \log U)\).

Edmonds–Karp Algorithm

Each time we augment along a shortest path. Every augmentation saturates at least one edge of the shortest-path DAG, so within every \(m\) augmentations the length of the shortest path increases by at least \(1\). The length is at most \(n\), so there are at most \(mn\) augmentations and the time complexity is \(O(m^2n)\).

Dinic's Algorithm

Use a dynamic tree to find augmenting paths: within a phase each edge is linked and cut \(O(1)\) times and each augmentation takes \(O(\log n)\) additional time, so a phase costs \(O(m\log n)\) and the total time over the \(O(n)\) phases is \(O(mn\log n)\).

Alternatively, a naive implementation without the dynamic tree runs in \(O(mn^2)\) time.
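A minimal sketch of this naive variant (BFS level graph plus DFS blocking flow with the usual current-edge pointers, without the dynamic-tree speed-up); class and method names are illustrative.

```python
from collections import deque

class Dinic:
    """Dinic's max-flow: edges stored as [to, capacity, index_of_reverse_edge]."""
    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]

    def add_edge(self, u, v, cap):
        self.adj[u].append([v, cap, len(self.adj[v])])
        self.adj[v].append([u, 0, len(self.adj[u]) - 1])   # residual back edge

    def _bfs(self, s, t):
        """Build the level graph of the residual network."""
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in self.adj[u]:
                if cap > 0 and self.level[v] == -1:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] != -1

    def _dfs(self, u, t, pushed):
        """Send flow along level-increasing edges; self.it[] remembers progress."""
        if u == t:
            return pushed
        while self.it[u] < len(self.adj[u]):
            v, cap, rev = self.adj[u][self.it[u]]
            if cap > 0 and self.level[v] == self.level[u] + 1:
                d = self._dfs(v, t, min(pushed, cap))
                if d > 0:
                    self.adj[u][self.it[u]][1] -= d
                    self.adj[v][rev][1] += d
                    return d
            self.it[u] += 1
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):                 # one phase per shortest-path length
            self.it = [0] * self.n
            while True:
                d = self._dfs(s, t, float("inf"))
                if d == 0:
                    break
                flow += d
        return flow
```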

Analysis on Unit Capacity Graph

Firstly, a blocking flow takes only \(O(m)\) time: every edge lies on at most one augmenting path (it is saturated and removed afterwards), and every retreat of the DFS deletes an edge.

After \(n^{2/3}\) phases, the distance from \(s\) to \(t\) is at least \(n^{2/3}\). By the pigeonhole principle, there are two adjacent levels with total size \(O(n^{1/3})\); the residual edges between them form an \(s\)-\(t\) cut of size at most the product of the two level sizes, so the remaining flow is at most \(O(n^{2/3})\).

Another analysis: after \(m^{1/2}\) phases, the distance from \(s\) to \(t\) is at least \(m^{1/2}\). By the pigeonhole principle, some pair of adjacent levels has at most \(m^{1/2}\) edges between them, so the remaining flow is at most \(m^{1/2}\).

This gives an upper bound \(O(m \min \{ n^{2/3}, m^{1/2} \})\).
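Spelling out the first pigeonhole step (for a simple unit-capacity graph): at least \(n^{2/3}\) levels partition at most \(n\) vertices, so some adjacent pair \(L_i, L_{i+1}\) satisfies

\[
|L_i| + |L_{i+1}| \;\le\; \frac{2n}{n^{2/3}} \;=\; 2n^{1/3},
\qquad\text{hence the cut between them has at most } |L_i|\cdot|L_{i+1}| \le n^{2/3} \text{ edges}.
\]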

Karger–Levine's Algorithm over Undirected Unit Capacity Graph

We first show that in a unit-capacity graph, an acyclic flow of value \(v\) has at most \(O(n\sqrt v) = O(n^{3/2})\) edges. If the shortest augmenting path has length \(d\), the preceding argument shows that the remaining flow is at most \(O((n/d)^2)\); hence when the remaining flow is still \(i\), the shortest augmenting path has length \(O(n / \sqrt i)\). Summing over the \(v\) augmenting paths, the total number of edges is \(\ll n \cdot (1 / 1^{1/2} + \cdots + 1/v^{1/2}) \ll n v^{1/2}\).
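The last sum is bounded by the standard integral estimate

\[
\sum_{i=1}^{v}\frac{1}{\sqrt i} \;\le\; 1 + \int_{1}^{v}\frac{dx}{\sqrt x} \;\le\; 2\sqrt v .
\]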

Using fast dynamic connectivity, we can maintain the residual network, in which only the \(O(n v^{1/2})\) flow-carrying edges are directed. So the total time complexity is \(\tilde O(n v^{3/2})\).

Min-Cut and Gomory–Hu Tree

Karger's Algorithm

Consider the following randomized strategy: repeatedly contract a uniformly random edge until only two vertices remain. The probability that a fixed minimum cut survives the first \(i\) contractions is at least \(\sim (n-i)^2 / n^2\). In the Karger–Stein refinement, we contract down to \(n/\sqrt 2\) vertices and then recurse twice independently.

The running time satisfies \(T(n) = 2T(n/\sqrt 2) + O(n^2)\), and the master theorem gives \(T(n) = O(n^2\log n)\).

The success probability satisfies \(P(n) \geq 1 - (1 - P(n / \sqrt 2) / 2)^2\), and one can prove that \(P(n) \gg 1/\log n\). So repeating \(O(\log n)\) times, i.e., in \(O(n^2\log^2 n)\) total time, finds a minimum cut with constant probability.
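A minimal sketch of the basic contraction step (independent single trials with repetition, without the Karger–Stein recursion); it assumes a connected multigraph given as an edge list on vertices \(0,\dots,n-1\), and the function names are illustrative.

```python
import random

def karger_single_run(n, edges):
    """One trial of random contraction on a connected multigraph (edge list).

    Contract uniformly random edges until two super-vertices remain; the
    surviving edges between them form a cut, which equals a fixed minimum cut
    with probability at least 1 / binom(n, 2).
    """
    parent = list(range(n))

    def find(x):                        # tiny union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = n
    while components > 2:
        u, v = edges[random.randrange(len(edges))]
        ru, rv = find(u), find(v)
        if ru == rv:                    # endpoints already merged: the edge is a
            continue                    # self-loop, so redraw (the draw stays
                                        # uniform over the contracted multigraph)
        parent[ru] = rv                 # contract the chosen edge
        components -= 1
    return [(u, v) for u, v in edges if find(u) != find(v)]

def karger_min_cut(n, edges, trials=None):
    """Repeat independent trials and keep the smallest cut found."""
    if trials is None:
        trials = n * n                  # ~n^2 trials give constant success probability
    best = None
    for _ in range(trials):
        cut = karger_single_run(n, edges)
        if best is None or len(cut) < len(best):
            best = cut
    return best
```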

Gomory–Hu Tree

Let \(f(s, t)\) be the minimum cut value between \(s\) and \(t\), and \(\delta(S)\) the value of the cut between \(S\) and \(V\smallsetminus S\). It is not hard to see that \(\delta\) is symmetric and submodular, i.e., \(\delta(S) + \delta(T) \geq \delta(S\cap T) + \delta(S\cup T)\).

The construction of the Gomory–Hu tree is based on the following invariant. We maintain a tree over a partition \(\bigsqcup_i V_i = V\), together with a representative \(s_i\) in each block, such that for every tree edge \((V_i, V_j)\), the bipartition of \(V\) obtained by removing that edge is a minimum cut between \(s_i\) and \(s_j\).

Now suppose we want to refine \(V_i\): select another vertex \(s'\in V_i\) and compute a minimum cut \(S\) between \(s_i\) and \(s'\). By submodularity (uncrossing), each block adjacent to \(V_i\) in the tree may be assumed to lie entirely inside \(S\) or entirely outside \(S\); this decides on which side of the new tree edge each adjacent subtree is attached. The two cases can be verified separately.

Once the Gomory–Hu tree is complete, for any two vertices \(s, t\) with tree path \(x_0 = s, x_1, \dots, x_\ell = t\) we have \(f(s, t) \geq \min \{ f(x_0, x_1), \dots, f(x_{\ell-1}, x_\ell) \}\), since any \((s,t)\)-cut must separate some consecutive pair on the path; and since each tree edge on the path also induces an \((s, t)\)-cut, equality is attained.

So the Gomory–Hu tree gives an equivalent representation: the minimum cut between any two vertices on the tree (the smallest edge weight on the tree path) is exactly the same as in the original graph.

      /***fileSort.js*ide:vscodeJavaScriptSortingAlgorithms*插件:IntelliSense,JSDoc,CodeLens,DebuggerforChrome,静态代码检查:ESLint,JSHint,FlowLangugaeSupport,StandardJS-JavaScriptStandardStyle,koroFileHeader(文件头注释),测试插件:Mochasideba......