Q: Difference between RIP v1 and RIP v2?

Best Answer

RIP v1 is a classful routing protocol.

RIP v2 is a classless routing protocol.


Wiki User

14y ago

Continue Learning about Basic Math

What is the difference between RIP v1 and RIP v2?

RIP v1 vs RIP v2, feature by feature:

VLSM support: RIP v1 - No; RIP v2 - Yes
Route propagation: RIP v1 - broadcast (255.255.255.255); RIP v2 - multicast (224.0.0.9)
Authentication: RIP v1 - none; RIP v2 - yes (plain text and MD5)

RIP v1 is a classful routing protocol; RIP v2 is a classless routing protocol. RIP v1 does not include subnet masks in its routing updates, while RIP v2 does. RIP v2 is an enhancement of RIP v1's features and extensions rather than an entirely new protocol.
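To make the classful/classless distinction concrete, here is a small, purely illustrative Python sketch (not taken from any RIP implementation): a RIP v1 receiver has to infer a classful mask from the address itself, while a RIP v2 receiver can use the prefix length carried in the update.

```python
import ipaddress

def classful_prefix(addr: str) -> int:
    """Prefix length a RIP v1 listener must assume, since no mask arrives in the update."""
    first_octet = int(addr.split(".")[0])
    if first_octet < 128:
        return 8    # Class A
    if first_octet < 192:
        return 16   # Class B
    return 24       # Class C

# RIP v1: only the network address is advertised; the mask is implied by its class.
v1_route = ipaddress.ip_network(f"172.16.0.0/{classful_prefix('172.16.0.0')}", strict=False)

# RIP v2: the update carries the subnet mask, so a VLSM subnet survives intact.
v2_route = ipaddress.ip_network("172.16.10.0/26")

print(v1_route)  # 172.16.0.0/16 -- the /26 detail is lost under classful assumptions
print(v2_route)  # 172.16.10.0/26
```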


What is the equation for the percentage difference of 20 and 31?

( | V1 - V2 | / ((V1 + V2)/2) ) * 100
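Plugging the question's values (20 and 31) into that formula, a quick Python check:

```python
v1, v2 = 20, 31
pct_diff = abs(v1 - v2) / ((v1 + v2) / 2) * 100
print(round(pct_diff, 2))  # 43.14, i.e. roughly a 43% difference
```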


What is the lexicographic max-min algorithm, and how does it work?

Lexicographic Max-Min Fairness in a Wireless Ad Hoc Network with Random Access
Xin Wang, Koushik Kar, and Jong-Shi Pang (Rensselaer Polytechnic Institute, Troy, NY 12180, USA)

Abstract. We consider the lexicographic max-min fair rate control problem at the link layer in a random access wireless network. In lexicographic max-min fair rate allocation, the minimum link rates are maximized in lexicographic order. For the Aloha multiple access model, we propose iterative approaches that attain the optimal rates under very general assumptions on the network topology and communication pattern; the approaches are also amenable to distributed implementation. The algorithms and results in this paper generalize those in our previous work [7] on maximizing the minimum rates in a random access network, and connect naturally to the "bottleneck-based" lexicographic rate optimization algorithm popularly used in wired networks [1].

1 Introduction

In a wireless network, the Medium Access Control (MAC) protocol defines the rules by which nodes regulate their transmissions onto the shared broadcast channel. An efficient MAC protocol should ensure high system throughput and distribute the available bandwidth fairly among the competing nodes. In this paper, we consider the problem of optimizing a random access MAC protocol with the goal of attaining lexicographic max-min fair rate allocations at the link layer. Fairness is a key consideration in designing MAC protocols, and the lexicographic max-min fairness metric [1] is one of the most widely used notions of fairness. The objective, stated simply, is to maximize the minimum rates in a lexicographic manner: first maximize the minimum rate, then the second minimum rate, then the third minimum rate, and so on.

In our previous work [7], we proposed algorithms that maximize the minimum rate in a wireless ad hoc network in a distributed manner, and showed that they achieve lexicographic max-min fairness only under very restrictive "symmetric" communication patterns. Whether lexicographic max-min fair rate allocation can be achieved in a more general wireless ad hoc network, preferably in a distributed manner, remained an open question. In this paper, we propose algorithms that answer this question.

Specifically, we propose two multi-step approaches that attain lexicographic max-min fair allocations. Intuitively, both algorithms reach lexicographic max-min fairness by solving a sequence of max-min rate optimization problems and identifying bottleneck links at each step. Loosely speaking, a bottleneck link is a link that has the minimum rate in the network in all possible optimal allocations. We prove the convergence of the two algorithms rigorously and discuss the possibility of implementing them in a distributed manner.

The paper is organized as follows. Section 2 describes the system model and problem formulation. Section 3 provides a few definitions used later in describing the solution approach. In Section 4, we propose an approach for providing the lexicographic max-min fair rates, based on identifying a subset of the bottleneck links.
In Section 5, we discuss how all bottleneck links can be identified so as to improve the efficiency of the algorithm; that section also concludes the work. All proofs are presented in the appendix.

2 Problem Formulation

2.1 System Model

A wireless network can be modelled as an undirected graph G = (N, E), where N and E respectively denote the set of nodes and the set of undirected edges. An edge exists between two nodes if and only if they can receive each other's signals (we assume a symmetric hearing matrix). There are 2|E| possible communication pairs, but only a subset of these may be actively communicating; the set of active communication pairs is the set of links, L. Each link (i, j) ∈ L is always backlogged. Without loss of generality, all nodes share a single wireless channel of unit capacity. For any node i, the neighbor set K_i = {j : (i, j) ∈ E} is the set of nodes that can receive i's signals; the out-neighbors O_i = {j : (i, j) ∈ L} ⊆ K_i are the neighbors to which i sends traffic, and the in-neighbors I_i = {j : (j, i) ∈ L} ⊆ K_i are the neighbors from which i receives traffic.

A transmission from node i reaches all of i's neighbors. Each node has a single transceiver, so a node cannot transmit and receive simultaneously. We do not assume any capture: node j cannot receive any packet successfully if more than one of its neighbors transmits simultaneously. Therefore a transmission on link (i, j) ∈ L is successful if and only if no node in K_j ∪ {j} ∖ {i} transmits during the transmission on (i, j).

We focus on random access wireless networks and use the slotted Aloha model [1] for interference and throughput. In this model, node i transmits a packet with probability P_i in a slot; if i has no outgoing link (O_i = ∅), then P_i = 0. Once i decides to transmit in a slot, it selects a destination j ∈ O_i with probability p_{ij}/P_i, where ∑_{j ∈ O_i} p_{ij} = P_i. Therefore, in each slot a packet is transmitted on link (i, j) with probability p_{ij}. Let p = (p_{ij}, (i, j) ∈ L) be the vector of transmission probabilities on all links, with feasible region

P_f = { p : 0 ≤ p_{ij} ≤ 1 ∀(i, j) ∈ L;  P_i = ∑_{j ∈ O_i} p_{ij}, 0 ≤ P_i ≤ 1 ∀ i ∈ N }.

The rate, or throughput, on link (i, j) is then

x_{ij}(p) = p_{ij} (1 − P_j) ∏_{k ∈ K_j ∖ {i}} (1 − P_k),   p ∈ P_f.   (1)

Here (1 − P_j) ∏_{k ∈ K_j ∖ {i}} (1 − P_k) is the probability that a packet transmitted on link (i, j) is successfully received at j.
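A minimal Python sketch of the throughput formula (1), assuming the links, the neighbor sets K_j, and the per-link attempt probabilities p_ij are given as plain dictionaries (the names here are illustrative, not from the paper):

```python
def node_attempt_prob(i, p):
    """P_i: probability that node i transmits at all in a slot (summed over its outgoing links)."""
    return sum(q for (src, _dst), q in p.items() if src == i)

def link_rate(i, j, p, neighbors):
    """x_ij(p) = p_ij * (1 - P_j) * product over k in K_j \\ {i} of (1 - P_k)  -- formula (1)."""
    success = 1 - node_attempt_prob(j, p)
    for k in neighbors[j]:
        if k != i:
            success *= 1 - node_attempt_prob(k, p)
    return p[(i, j)] * success

# Tiny example: links A->B and C->B contend at the common receiver B.
neighbors = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
p = {("A", "B"): 0.5, ("C", "B"): 0.5}
print(link_rate("A", "B", p, neighbors))  # 0.5 * (1 - 0) * (1 - 0.5) = 0.25
```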
2.2 Lexicographic Max-Min Fair Rate

Let x = (x_{ij}, (i, j) ∈ L) denote the vector of rates of all active links (the allocation vector), and let x̃ be x sorted in nondecreasing order. An allocation vector x^1 is lexicographically greater than x^2, written x^1 ≻ x^2, if the first non-zero component of x̃^1 − x̃^2 is positive; x^1 is lexicographically no less than x^2, written x^1 ⪰ x^2, if x̃^1 − x̃^2 = 0 or the first non-zero component of x̃^1 − x̃^2 is positive. A rate allocation is lexicographic max-min fair if its allocation vector is lexicographically no less than any other feasible allocation vector. In the lexicographic max-min fair rate allocation vector, therefore, a rate component can be increased only at the cost of decreasing a rate component of equal or lesser value, or by making the vector infeasible.

3 Preliminaries

This section introduces a few definitions used later in describing our solution approach and in stating and proving our results. We also define the max-min fair rate allocation problem and the concept of a bottleneck link, the two main components of our approach to the lexicographic max-min fair rate allocation problem.

3.1 Directed Link Graph and Its Component Graph

3.1.1 Directed Link Graph

Recall that a transmission on link (i, j) is successful if and only if node j and all neighbors of node j (except node i) are silent. It follows that the interference relationship between two links (i, j) and (s, t) need not be symmetric. As an example, consider links (i, j) and (j, k) with i ≠ k: a transmission on (i, j) succeeds only if (j, k) is silent, but if i is not in k's neighborhood, a successful transmission on (j, k) does not require link (i, j) to be silent.

We define a directed graph, the directed link graph G_L = (V_L, E_L), in which each vertex represents a link of the original network and there is an edge from link (i, j) to link (s, t) if and only if a successful transmission on (s, t) requires link (i, j) to be silent. We write (i, j) ⇝ (s, t) when there is a path from link (i, j) to link (s, t) in the directed link graph.

Lemma 1 (proof in the appendix). Let x*_{ij} and x*_{st} denote the lexicographic max-min fair rates of links (i, j) and (s, t). If (i, j) ⇝ (s, t) in the directed link graph, then x*_{ij} ≤ x*_{st}.

For a directed graph G = (V, E), the set of predecessors of u ∈ V is P_u = {v ∈ V : v ⇝ u} ∪ {u}. For any vertex set U ⊆ V, the subgraph induced by U is G_U = (U, E_U), where E_U = {(u, v) ∈ E : u ∈ U, v ∈ U}.

3.1.2 Component Graph

In the directed graph G_L = (V_L, E_L), a strongly connected component is a maximal set of vertices C such that every pair of vertices u, v ∈ C is mutually reachable (u ⇝ v and v ⇝ u). The following corollaries follow easily from Lemma 1.

Corollary 1. All links belonging to the same strongly connected component of G_L have the same lexicographic max-min fair rate.

Corollary 2. Let C_1 and C_2 be strongly connected components of G_L with lexicographic max-min fair rates x*_1 and x*_2. If there exist u ∈ C_1 and v ∈ C_2 with u ⇝ v, then x*_1 ≤ x*_2.
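The directed link graph and its strongly connected components can be built mechanically from these definitions. Here is a Python sketch, assuming the networkx package is available; it adds an edge from link (i, j) to link (s, t) when (s, t)'s success requires (i, j) to stay silent, i.e. either the two links share the transmitter, or the transmitter i interferes at the receiver t.

```python
import networkx as nx

def directed_link_graph(links, neighbors):
    """One vertex per link; edge (i,j) -> (s,t) if a successful transmission on (s,t)
    requires link (i,j) to be silent."""
    g = nx.DiGraph()
    g.add_nodes_from(links)
    for (i, j) in links:
        for (s, t) in links:
            if (i, j) == (s, t):
                continue
            same_transmitter = (i == s)
            interferes_at_receiver = i in (neighbors[t] | {t}) - {s}
            if same_transmitter or interferes_at_receiver:
                g.add_edge((i, j), (s, t))
    return g

# Toy hearing relation and link set (the paper's 8-node example could be plugged in instead).
neighbors = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
links = [("A", "B"), ("B", "C"), ("C", "B")]
gl = directed_link_graph(links, neighbors)
for scc in nx.strongly_connected_components(gl):
    print(scc)  # by Corollary 1, each component shares one lexicographic max-min fair rate
```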
Given the directed link graph G_L, we decompose it into its strongly connected components C_1, C_2, ..., C_k and construct the component graph: its vertex set {v_1, ..., v_k} contains one vertex v_i per strongly connected component C_i of G_L, and there is a directed edge (v_i, v_j) whenever G_L contains an edge (x, y) with x ∈ C_i and y ∈ C_j. Viewed another way, the component graph is obtained from G_L by contracting all edges whose endpoints lie in the same strongly connected component. From Lemma 22.13 of [6], the component graph is a directed acyclic graph. For any vertex v of the component graph, C(v) denotes the set of links in the corresponding strongly connected component, and for a vertex set U we define C(U) = ∪_{v_i ∈ U} C(v_i).

3.1.3 Illustrative Example

Consider a small wireless ad hoc network of 8 nodes (A through H) and 9 links (numbered 0 through 8), shown in Figure 1, whose symmetric hearing matrix gives the undirected edge set E = {(A,B), (B,C), (C,D), (D,E), (D,F), (F,G), (F,H), (G,H)}. The directed link graph has 9 vertices, one per link. For instance, a successful transmission on link 0 requires node C and its neighboring node D to keep silent, which yields edges (7, 0), (1, 0) and (6, 0) in the directed link graph; and when link 0 is scheduled from node B, no other link from node B may be scheduled, which yields edge (8, 0). Finding all other edges in the same way gives the directed link graph of Figure 2. It has two strongly connected components: C_1 contains links 0, 1, 6, 7 and 8, and C_2 contains links 2, 3, 4 and 5. Since there are edges from vertices in C_1 to vertices in C_2, at the lexicographic max-min fair allocation x*_1 ≤ x*_2, where x*_1 and x*_2 are the fair rates of the links in C_1 and C_2 respectively.

3.2 Max-Min Fair Rate Allocation Problem

The objective of max-min rate allocation is to maximize the minimum rate over all links. Whereas lexicographic max-min fairness optimizes the entire sorted vector of link rates, max-min fairness maximizes only the minimum component of the rate vector, and is therefore a weaker notion of fairness. We will show, however, that the lexicographic max-min fair rate allocation problem can be solved via a sequence of max-min fair rate allocation problems. In our context, the max-min fair rate allocation problem is [7]:

max x   s.t.   x ≤ x_{ij}(p)  ∀(i, j) ∈ L,  p ∈ P_f,   (2)

where x is the max-min rate and x_{ij}(p) is given by (1). Problem (2) is equivalent to the convex program

max f(z)   s.t.   h_{ij}(z) ≤ 0  ∀(i, j) ∈ L,   (3)

where z = (y, p) with y = log(x), f(z) = y, and h_{ij}(z) = y − log(x_{ij}(p)). Each h_{ij}(z) is the logarithmically transformed capacity constraint of link (i, j) and is convex. The constraint p ∈ P_f can be dropped because the logarithmic transformation automatically ensures the feasibility of p.

3.3 Bottleneck Link

Loosely speaking, a bottleneck link is a link that attains the minimum rate in (2) and hence decides the max-min rate. Define g_{ij}(x, p) = x − x_{ij}(p), and denote an optimal solution of (2) by (x*, p*); it is easy to argue that x* is unique while p* may not be. If g_{ij}(x*, p*) = 0 for every optimal solution (x*, p*), i.e. the constraint of link (i, j) is active at all optimal solutions, then link (i, j) is called a bottleneck link.

An alternative definition is the following. Perturb (3) on link (i, j):

max f(z)   s.t.   h_{ij}(z) ≤ −ε,   h_{uv}(z) ≤ 0  ∀(u, v) ∈ L ∖ {(i, j)}.   (4)

The optimal value of (4) is a function of ε, denoted U*_{ij}(ε). Link (i, j) is a bottleneck link if U*_{ij}(0) > U*_{ij}(ε) for every positive ε. It is easy to argue that this definition is consistent with the previous one. The result below follows directly from Lemma 1.

Corollary 3. If link (i, j) is a bottleneck link and links (i, j) and (s, t) belong to the same strongly connected component, then (s, t) is also a bottleneck link.

Lemma 2 (proof in the appendix). If link l is a bottleneck link and l ∈ C(v) for a vertex v of the component graph, then every link in C(P_v) is a bottleneck link, where P_v is the set of predecessors of v in the component graph.

From Corollary 3 and Lemma 2, we can identify one or more strongly connected components (in the directed link graph) that consist only of bottleneck links.

4 Algorithm Based on Identifying a Subset of the Bottleneck Links

4.1 Identifying Bottleneck Links Using Lagrange Multipliers

Identifying bottleneck links directly from either definition has extremely high computational cost and may be practically infeasible. In this section we discuss how at least one bottleneck link can be identified efficiently, using Lagrange multipliers. Consider (3), the transformed convex program for the max-min fair rate problem. The Slater constraint qualification holds for (3), so the global optimality of a feasible solution is characterized by the Karush-Kuhn-Tucker (KKT) conditions:

0 = ∇f(z) + ∑_{(i,j)∈L} λ_{ij} ∇h_{ij}(z),    0 ≤ λ ⟂ h(z) ≤ 0,   (5)

where h(z) is the |L|-vector with components h_{ij}(z), λ is the |L|-vector of Lagrange multipliers λ_{ij} for the h_{ij}, and ⟂ denotes orthogonality, i.e. the complementary slackness condition. The relationship between Lagrange multipliers and bottleneck links is stated in the following lemma. Its proof uses the cross complementarity property of KKT solutions, which follows from the convexity of the KKT solution set: if (z^1, λ^1) and (z^2, λ^2) are two KKT pairs, then λ^1_{ij} h_{ij}(z^2) = 0 for all (i, j) ∈ L.

Lemma 3. For any KKT pair (z*, λ*), if λ*_{ij} > 0 then link (i, j) is a bottleneck link.

Lemma 3 is a special case of Lemma 4. Since ∇f must be non-zero at optimality, at least one Lagrange multiplier is non-zero for any KKT pair, so at least one bottleneck link can always be identified from the multipliers; following Corollary 3 and Lemma 2, a whole set of bottleneck links can then be identified. Note, however, that an arbitrary Lagrange multiplier need not reveal all bottleneck links, since it is possible for a bottleneck link to have a zero Lagrange multiplier.
In fact, identifying all bottleneck links is in general a difficult problem; we return to it in Section 5.

4.2 Solution Approach

To attain lexicographic max-min fairness, we adopt the following procedure:

1. For a given wireless Aloha network G = (N, E), compute the directed link graph G_L = (V_L, E_L) and construct its component graph.
2. Set k = 0 and let V_k be the vertex set of the component graph.
3. Solve the transformed convex program of (2) for all links in C(V_k):
   max y   s.t.   y ≤ log(x_{ij}(p))  ∀(i, j) ∈ C(V_k),  p ∈ P_f.   (6)
   Denote the max-min fair rate by x*_k = e^{y*}, where y* is the optimal value of (6).
4. Find at least one bottleneck link from the Lagrange multipliers, and find the corresponding vertex v in the component graph.
5. Set U_k = P_v. The lexicographic max-min rate for the links in C(U_k) is x*_k.
6. Fix the link attempt probabilities of all links in C(U_k), and set V_{k+1} = V_k ∖ U_k.
7. If V_{k+1} is nonempty, restrict the component graph to V_{k+1}, increment k by 1, and go to step 3.
8. Terminate if V_{k+1} is empty.

Intuitively, the procedure repeatedly solves (6), which maximizes the next minimum rate in the network. At the optimum of (6) we can identify at least one bottleneck link from the Lagrange multipliers and then, by Corollary 3 and Lemma 2, one or more strongly connected components consisting only of bottleneck links. We fix the attempt probabilities of the bottleneck links identified and solve (6) again for the remaining links. The rate of a link, once it has been identified as a bottleneck, remains unchanged in later steps, and the procedure repeats until every link has been identified as a bottleneck (at some step k) and its attempt probability fixed. The following theorem states that the procedure converges to a lexicographic max-min fair rate allocation.

Theorem 1 (proof in the appendix). Let x* and p* be the vectors of link rates and link attempt probabilities produced by steps 1 through 8 above. Then x* is the lexicographic max-min fair rate allocation and p* makes x* feasible, i.e. x*_{ij} ≤ x_{ij}(p*) for every link (i, j). Moreover x* and p* are unique, and x*_{ij} = x_{ij}(p*) for every link (i, j).
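The steps above can be written as a short driver loop. The sketch below is structural only: the per-step max-min solver (problem (6)) and the multiplier-based bottleneck test are passed in as callables with hypothetical signatures, since their actual implementations are the convex program and KKT machinery described in the text.

```python
import networkx as nx

def lexicographic_max_min(component_graph, links_of, solve_max_min, bottleneck_vertex):
    """
    component_graph : nx.DiGraph over the SCC vertices of the directed link graph
    links_of(v)     : links contained in component v (the paper's C(v))
    solve_max_min(active_links, fixed_probs) -> (rate, probs, multipliers)   # step 3 / problem (6)
    bottleneck_vertex(multipliers) -> a vertex whose link constraint has a positive multiplier
    """
    remaining = set(component_graph.nodes)
    fixed_probs, fair_rate = {}, {}
    while remaining:                                                      # steps 3-8
        active_links = [l for v in remaining for l in links_of(v)]
        rate, probs, multipliers = solve_max_min(active_links, fixed_probs)
        v = bottleneck_vertex(multipliers)                                # step 4
        # Step 5: v and all of its predecessors in the component graph are bottlenecks (Lemma 2).
        bottlenecks = (nx.ancestors(component_graph, v) | {v}) & remaining
        for u in bottlenecks:
            for link in links_of(u):
                fair_rate[link] = rate            # their lexicographic max-min rate is fixed now
                fixed_probs[link] = probs[link]   # step 6: freeze attempt probabilities
        remaining -= bottlenecks                  # step 7: continue with whatever is left
    return fair_rate, fixed_probs
```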
The procedure above can be implemented in a distributed manner. Constructing a strongly connected component amounts to each vertex of the directed link graph finding the set of vertices it can reach and the set of vertices that can reach it, which can be achieved with a distributed path-search algorithm; and (6) can be solved distributedly to obtain both the primal variables and the Lagrange multipliers (see [7] for details). Note also that the procedure identifies only a subset of the bottleneck links at each step; in the worst case it identifies a single strongly connected component per step and solves (6) more times than necessary. This motivates a solution approach based on identifying all bottleneck links, discussed next.

5 Algorithm Based on Identifying All Bottleneck Links

5.1 Bottleneck Links, Maximally Complementary Solutions, and Interior Point Methods

The concept of a bottleneck link is closely tied to that of a maximally complementary solution [8, 9] of a monotone nonlinear complementarity problem (NCP) [5] derived from the KKT conditions of a convex program satisfying a suitable constraint qualification. To define the corresponding concept of maximal complementarity for the KKT system (5), associate with every KKT pair (z, λ) satisfying (5) the three index sets

α(z, λ) = {(i, j) : λ_{ij} > 0 = h_{ij}(z)},   β(z, λ) = {(i, j) : λ_{ij} = 0 = h_{ij}(z)},   γ(z, λ) = {(i, j) : λ_{ij} = 0 > h_{ij}(z)}.

A KKT pair (ẑ, λ̂) is maximally complementary if the index set α(ẑ, λ̂) ∪ γ(ẑ, λ̂) is maximal among all KKT pairs; that is, if (z, λ) is another KKT pair with α(ẑ, λ̂) ∪ γ(ẑ, λ̂) ⊆ α(z, λ) ∪ γ(z, λ), then the inclusion holds with equality. It is not difficult to show (see [9, page 627]) that if (ẑ, λ̂) is maximally complementary, then α(ẑ, λ̂) and γ(ẑ, λ̂) are each maximal as separate sets among all KKT pairs. An important fact, easy to prove from the convexity of the KKT solution set, is that the index sets α, β and γ are identical across all maximally complementary KKT pairs; we therefore denote the common sets by α̂, β̂ and γ̂.

By definition, link (i, j) ∈ L is a bottleneck link if its capacity constraint h_{ij} is satisfied with equality by all optimal solutions of (3). Let L_B ⊂ L denote the set of bottleneck links and L_NB ⊂ L the set of non-bottleneck links, so L = L_B ∪ L_NB. The main connection between a bottleneck link and a maximally complementary solution is the following.

Lemma 4 (proof in the appendix). Link (i, j) is a bottleneck link, i.e. h_{ij}(z) holds with equality at all optimal solutions, if and only if (i, j) ∈ α̂ ∪ β̂; that is, L_B = α̂ ∪ β̂ and L_NB = γ̂.

The relation between maximal complementarity and bottleneck links can be illustrated with a small case. Suppose an Aloha network has three links, set up so that x_1(p) = p_1(1 − p_2), x_2(p) = p_2(1 − p_1) and x_3(p) = p_3(1 − p_2). The max-min fair rate problem is then

max x   s.t.   x ≤ p_1(1 − p_2),  x ≤ p_2(1 − p_1),  x ≤ p_3(1 − p_2).   (7)

The optimal value is x* = 0.25, attained at p_1 = p_2 = 0.5 and any 0.5 ≤ p_3 ≤ 1. Only links 1 and 2 are bottleneck links, i.e. L_B = {1, 2}; and indeed α̂ = {1, 2}, β̂ = ∅ and γ̂ = {3}, so L_B = α̂ ∪ β̂ and L_NB = γ̂.
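The three-link example can be verified numerically. The brute-force grid search below (an illustration only, not one of the paper's algorithms) recovers the max-min rate 0.25 at p1 = p2 = 0.5, with links 1 and 2 as the bottlenecks and p3 free to be anything from 0.5 to 1:

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 51)   # attempt probabilities in steps of 0.02
best_rate, best_p = -1.0, None
for p1 in grid:
    for p2 in grid:
        for p3 in grid:
            x1 = p1 * (1 - p2)     # link 1
            x2 = p2 * (1 - p1)     # link 2
            x3 = p3 * (1 - p2)     # link 3
            rate = min(x1, x2, x3)
            if rate > best_rate:
                best_rate, best_p = rate, (float(p1), float(p2), float(p3))

print(round(best_rate, 4), best_p)  # 0.25 at p1 = p2 = 0.5 (p3 can be any value of at least 0.5)
```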
The significance of Lemma 4 is that the index sets α̂ and β̂ can be obtained by an interior-point algorithm (see [9] and [5, Chapter 11]) applied to the KKT formulation (5). However, although such an interior-point algorithm enjoys both polynomial complexity and local quadratic convergence, it cannot easily be implemented in a distributed manner.

5.2 The Barrier Method

5.2.1 Identifying All the Bottleneck Links Using the Barrier Method

In this section we discuss how to identify all the bottleneck links using the barrier method. For the convex program (3) of max-min fair rate optimization, the barrier problem is

min θ(μ)   s.t.   μ ≥ 0,   (8)

where θ(μ) = inf{ −f(z) + μ B(z) : h_{ij}(z) < 0 ∀(i, j) ∈ L, p ∈ P_f }. Here B is a barrier function that is nonnegative and continuous over the region {z : h_{ij}(z) < 0, p ∈ P_f} and approaches infinity as the boundary of that region is approached from the interior. Specifically,

B(z) = ∑_{(i,j)∈L} φ(h_{ij}(z)),

where φ is a function of one variable, continuous over {s : s < 0}, satisfying φ(s) ≥ 0 for s < 0 and φ(s) → ∞ as s → 0⁻. A typical choice is φ(s) = −1/s; the logarithmic barrier −log(−s) is another common choice. We refer to −f(z) + μB(z) as the auxiliary function.

Intuitively, when solving the max-min fair rate optimization problem (3) with the barrier method, a very large penalty (the barrier function) is added to the objective near the boundary, converting the originally constrained problem into the unconstrained problem (8). Lemma 6 (stated in the appendix) ensures that for every positive μ there exists z_μ with θ(μ) = −f(z_μ) + μB(z_μ), and that the limit of any convergent subsequence of {z_μ} as μ approaches zero is an optimal solution of the primal problem (3); this establishes the validity of the barrier method. When minimizing the auxiliary function for any given positive μ, the solution stays away from the boundary, since a point close to the boundary incurs a very large penalty. In the limit, therefore, the constraints are active only for the bottleneck links; for a non-bottleneck link, whose constraint can be inactive at some optimal solution, the constraint will not be active. The optimal solution produced by the barrier method thus naturally separates the links into bottleneck and non-bottleneck links. We make this argument rigorous in the following theorem.

Theorem 2 (proof in the appendix). Suppose φ(s) is such that the auxiliary function −f(z) + μB(z) is strictly convex in z. If z* is the limit point of a convergent subsequence of {z_μ}, then h_{ij}(z*) = 0 for all (i, j) ∈ L_B and h_{ij}(z*) < 0 for all (i, j) ∈ L_NB, where L_B and L_NB denote the sets of bottleneck and non-bottleneck links respectively.
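Theorem 2's separation of tight and slack constraints can be seen on a toy one-dimensional barrier problem (purely illustrative, not the paper's program): maximize y subject to y - 1 <= 0 and y - 2 <= 0 with the barrier phi(s) = -1/s. As mu shrinks, the minimizer of the auxiliary function approaches y = 1, the "bottleneck" constraint y - 1 tightens toward zero, and the slack constraint y - 2 stays strictly negative.

```python
import numpy as np

def auxiliary(y, mu):
    # -f(y) + mu * B(y) with f(y) = y, phi(s) = -1/s, constraints y - 1 <= 0 and y - 2 <= 0
    return -y + mu * ((-1.0 / (y - 1.0)) + (-1.0 / (y - 2.0)))

ys = np.linspace(0.0, 0.999, 10_000)
for mu in (0.1, 0.01, 0.001, 0.0001):
    y_mu = float(ys[np.argmin(auxiliary(ys, mu))])
    print(mu, round(y_mu, 3), round(y_mu - 1.0, 3), round(y_mu - 2.0, 3))
# The first slack value tends to 0 (an active, bottleneck-like constraint);
# the second stays near -1 (inactive), mirroring the split of L_B versus L_NB in Theorem 2.
```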
5.2.2 Distributed Implementation

To provide lexicographic max-min fairness in a distributed manner, we need to solve the max-min fair rate problem in a distributed manner. In [7] it is shown that (3) is equivalent to the convex problem

min ∑_{(i,j)∈L} y_{ij}²   s.t.   y_{ij} − log(x_{ij}(p)) ≤ 0  ∀(i, j) ∈ L;   y_{ij} ≤ y_{st}  ∀(i, j) ∈ L, (s, t) ∈ L(i, j);   p ∈ P_f,   (9)

where y_{ij} is the logarithm of the max-min fair rate on link (i, j) and L(i, j) denotes the neighboring links of (i, j) in the directed link graph, i.e. L(e) = {ê : (e, ê) ∈ E_L or (ê, e) ∈ E_L} for e = (i, j) ∈ V_L. Intuitively, (9) introduces a variable y_{ij} for each link and forces the variables to be equal (the second constraint) so that the originally centralized problem can be solved distributedly; the logarithmic transformation makes each capacity constraint convex; and since y_{ij} ≤ 0 for every link, maximizing each y_{ij} is equivalent to minimizing ∑ y_{ij}².

Based on the convex program (9), we construct the following barrier problem for μ > 0:

min ∑_{(i,j)∈L} y_{ij}² + μ ∑_{(i,j)∈L} 1 / (log(x_{ij}(p)) − y_{ij})   s.t.   y_{ij} ≤ y_{st}  ∀(i, j) ∈ L, (s, t) ∈ L(i, j);   p ∈ P_f.   (10)

Problem (10) moves the capacity constraints into the objective via the barrier function φ(s) = −1/s; by Lemma 6 (see the appendix), its solution converges to the optimal solution of (9) as μ → 0. By the scalar composition rule [2] (for φ: R → R and g: Rⁿ → R, φ ∘ g is convex if φ is convex and nondecreasing and g is convex; here φ(s) = −1/s and g_{ij}(y, p) = y_{ij} − log(x_{ij}(p))), the barrier function

B(y, p) = ∑_{(i,j)∈L} 1 / (log(x_{ij}(p)) − y_{ij})

is convex in (y, p). Hence (10) is a convex program and can be solved in a distributed manner.

We now present a distributed algorithm that solves (10) iteratively. Let p_{ij}^{(n)} and y_{ij}^{(n)} denote the attempt probability and the logarithmic rate of link (i, j) at iteration n, and define the link relation indicator for link (i, j) and a neighboring link (s, t):

ν_{ij,st}^{(n)} = 0 if y_{ij} ≤ y_{st},  and 1 if y_{ij} > y_{st}.   (11)

Let κ be a positive constant and γ_n the step size at iteration n. The logarithmic rate on link (i, j) is updated as

y_{ij}^{(n+1)} = y_{ij}^{(n)} − γ_n ( 2 y_{ij}^{(n)} + μ ∂B/∂y_{ij} + κ ∑_{(s,t)∈L(i,j)} ( ν_{ij,st}^{(n)} − ν_{st,ij}^{(n)} ) ),   (12)

and the attempt probability on link (i, j) is updated as

p_{ij}^{(n+1)} = p_{ij}^{(n)} − γ_n μ ∂B/∂p_{ij}.   (13)

Following the standard analysis of the subgradient method [3], the max-min rates (the exponentials of the y_{ij}) and the link attempt probabilities converge to a neighborhood of the optimum, and the neighborhood becomes arbitrarily small with decreasing step size.

5.3 Solution Approach

If we let C(U_k) be the set of all bottleneck links identified the k-th time (3) is solved, then not only does the procedure of Section 4.2 apply unchanged, but Theorem 1 on convergence also continues to hold. There are, however, considerable differences between the two procedures. Because the present procedure identifies all bottleneck links every time (3) is solved, (3) is solved the least possible number of times, which greatly lowers the computational cost. Moreover, with the barrier method, identifying the set of bottleneck links requires no information about the structure of the component graph, which greatly reduces the complexity of a distributed implementation. This simplicity in computation and implementation comes at a cost: in practice the barrier method yields only approximate solutions, since its solution converges to the optimum only as μ approaches zero. The solution does, however, come closer to the optimum as μ is decreased, and can be made arbitrarily close by taking μ sufficiently small.

6 Conclusion

In this paper we have addressed the problem of providing lexicographic max-min fair rate allocations at the link layer in a wireless ad hoc network with random access, and proposed two efficient approaches that attain the globally optimal solutions and are amenable to distributed implementation. Our algorithms are based on repeatedly solving the problem of maximizing the minimum rate and identifying bottleneck links at each iteration.
In this respect, our approaches share an intuitive similarity with the well-known bottleneck-based algorithm for computing lexicographic max-min fair rates in wired networks [1, Chapter 6]. However, the lexicographic max-min fair rate allocation problem in our context is significantly more complex than its wired counterpart: whereas the link capacity constraints of a wired network are linear, the constraints here are non-linear, non-convex and non-separable, so the notion of a bottleneck link and the proofs of optimality of our approaches are considerably more involved. Note also that the bottleneck-based algorithm of [1, Chapter 6] considers multi-hop end-to-end sessions in a wired network and is therefore designed to attain fair rates at the transport layer (the corresponding link-layer problem, with single-hop connections, is trivial in the wired case), whereas the approaches in this paper operate at the link layer, where only single-hop connections need be considered. The question of attaining lexicographic max-min fair rates for end-to-end multi-hop wireless sessions remains open for future investigation.

Appendix

A Proof of Lemma 1

First we show that if there is an edge from (i, j) to (s, t) in the directed link graph, then x*_{ij} ≤ x*_{st}. By the definition of an edge in the directed link graph, one of two cases must hold: (1) links (i, j) and (s, t) have the same source node; or (2) i is either the receiver t of link (s, t) or a neighboring node of t. Suppose, for contradiction, that x*_{ij} > x*_{st}, and let p*_{uv}, (u, v) ∈ L, be the attempt probabilities at the lexicographic max-min fair allocation.

In case (1), i and s are the same node. We can then find δ > 0 and set p′_{ij} = p*_{ij} − δ, p′_{st} = p*_{st} + δ, leaving all other probabilities unchanged; the total transmission probability of the common source is unaffected, so the rate of (s, t) strictly increases while the rates of all links other than (i, j) and (s, t) remain unchanged. Since the rate of (s, t) increases while no link whose rate is smaller than x*_{st} loses rate, this contradicts the lexicographic max-min fairness of x*.

In case (2), i is the receiver of link (s, t) or a neighbor of node t. We can find δ > 0 and set p′_{ij} = p*_{ij} − δ, leaving all other probabilities unchanged, such that x*_{st} < x′_{st} < x′_{ij} < x*_{ij}. In moving from p* to p′, the rates of all links except (i, j) do not decrease; again, the rate of (s, t) increases while no link with a smaller rate is decreased, contradicting the lexicographic max-min fairness of x*.

Hence in either case x*_{ij} ≤ x*_{st}. If (i, j) ⇝ (s, t) along a path of links (u_1, v_1), (u_2, v_2), ..., (u_n, v_n), applying the single-edge result along the path gives x*_{ij} ≤ x*_{u_1 v_1} ≤ x*_{u_2 v_2} ≤ ... ≤ x*_{u_n v_n} ≤ x*_{st}, which completes the proof.

B Proof of Lemma 2

Denote the max-min fair rate of (2) by x*, and for any link r denote its lexicographic max-min fair rate by x*_r; clearly x*_r ≥ x*, since otherwise x* would not be the max-min rate of (2). For any r ∈ C(P_v) we have r ⇝ l in the directed link graph, so by Lemma 1 x*_r ≤ x*_l. Hence the lexicographic max-min fair rates of r and l are equal, and since l is a bottleneck link, r is also a bottleneck link.

C Proof Outline of Theorem 1

First, the rates of the bottleneck links in C(U_k) remain fixed at every later step l > k. By the definition of the directed link graph, the rate of a link (i, j) depends on the attempt probability of a link (s, t) only if (s, t) is a predecessor of (i, j); and by Lemma 2, P_v ⊆ ∪_{l=0,...,k} U_l for every v ∈ U_k. Since all links in C(∪_{l=0,...,k} U_l) have fixed attempt probabilities at iterations l > k, the rates of the links in C(U_k) remain fixed.

Next, the attempt probabilities of the links in C(U_k), obtained from (6), are unique.

Lemma 5. Consider the max-min fair rate problem max y s.t. y ≤ log(x_{ij}(p)) for all (i, j) ∈ C(V_k), p ∈ P_f. At the optimal solution, the attempt probabilities of the links in C(U_k) are unique.

Proof. If l_1 ∈ C(V_k ∖ U_k) and l_2 ∈ C(U_k), then x*_{l_1} ≥ x*_{l_2}, since l_2 ⇝ l_1 in the directed link graph. We may therefore remove the constraints corresponding to the links in C(V_k ∖ U_k) and obtain the equivalent problem

max y   s.t.   y ≤ log(x_{ij}(p))  ∀(i, j) ∈ C(U_k),  p ∈ P_f.   (14)

Since all links in C(U_k) are bottleneck links, all constraints in (14) are active at the optimum, which determines both the max-min rate and the attempt probabilities of the links in C(U_k). Write p^{U_k} = (p_{ij} : (i, j) ∈ C(U_k)), and note that for (i, j) ∈ C(U_k), x_{ij}(p) depends only on p^{U_l} for l ≤ k, with p^{U_l} already fixed for l < k. Let y* be the optimal value of (14) and suppose p^{U_k}_1 and p^{U_k}_2 are both optimal. Since all constraints of (14) are active at the optimum, y* = log(x_{ij}(p^{U_k}_1)) = log(x_{ij}(p^{U_k}_2)) for all (i, j) ∈ C(U_k). For p^{U_k}_λ = λ p^{U_k}_1 + (1 − λ) p^{U_k}_2 with 0 < λ < 1, the strict concavity of log(x_{ij}(·)) gives log(x_{ij}(p^{U_k}_λ)) > λ y* + (1 − λ) y* = y* for every (i, j) ∈ C(U_k), contradicting the optimality of y* for (14). Hence (14) has a unique solution.

From Lemma 2, all links in C(U_k) are bottleneck links with max-min fair rate x*_k, and we have shown that this rate remains fixed in later steps. From Corollary 2 and Lemma 2, x*_0 ≤ x*_1 ≤ ... ≤ x*_n over steps 0 through n. Finally, if the algorithm terminates at step n, the resulting allocation is lexicographic max-min fair: the procedure first maximizes the minimum rate in the network, then the next smallest rate, and so on, so the rate of any link cannot be increased without decreasing the rate of a link with a smaller rate; and since the attempt probabilities obtained from (14) at each step are unique, it is not possible to increase the rate of one link while the rates of all links with smaller rates remain unchanged. Hence the procedure yields the lexicographic max-min fair rate allocation.

D Proof of Lemma 4

If (i, j) is a bottleneck link, then clearly (i, j) ∉ γ̂ by the cross complementarity property. Conversely, suppose (i, j) is not a bottleneck link. Then there exists an optimal solution z̃ of (3) with h_{ij}(z̃) < 0. Let λ̃ be any KKT multiplier corresponding to z̃ and let (ẑ, λ̂) be a maximally complementary KKT pair. By the convexity of the KKT solution set, (z^{1/2}, λ^{1/2}) = (1/2)(ẑ, λ̂) + (1/2)(z̃, λ̃) is also a KKT pair, and clearly γ̂ = γ(ẑ, λ̂) ⊆ γ(z^{1/2}, λ^{1/2}). The maximality of γ̂ then implies (i, j) ∈ γ(z^{1/2}, λ^{1/2}) = γ̂.

E Proof of Theorem 2

The following standard lemma [4] ensures the validity of using barrier functions to convert a constrained problem into a single unconstrained problem or a sequence of unconstrained problems.

Lemma 6. For the barrier method applied to our problem: (1) for each μ > 0 there exists z_μ ∈ Z with g(z_μ) < 0 such that θ(μ) = f(z_μ) + μB(z_μ) = inf{f(z) + μB(z) : g(z) < 0, z ∈ Z}; (2) inf{f(z) : g(z) ≤ 0, z ∈ Z} ≤ inf{θ(μ) : μ > 0}; (3) for μ > 0, f(z_μ) and θ(μ) are nondecreasing functions of μ, and B(z_μ) is a nonincreasing function of μ; (4) min{f(z) : g(z) ≤ 0, z ∈ Z} = lim_{μ→0+} θ(μ) = inf_{μ>0} θ(μ); (5) the limit of any convergent subsequence of {z_μ}, at least one of which must exist, is an optimal solution of the primal problem, and μB(z_μ) → 0 as μ → 0+.

Let A_μ(y, p) = −y + μ ∑_{(i,j)∈L} φ(h_{ij}(y, p)) denote the auxiliary function of the barrier problem (8) at a given μ > 0, and let (y*, p*) denote the limit point of a convergent subsequence of {z_μ} = {(y_μ, p_μ)}. Since φ is assumed to make A_μ strictly convex, (y*, p*) is an optimal solution of (3) by Lemma 6, so by the definition of a bottleneck link, h_{ij}(y*, p*) = 0 for all (i, j) ∈ L_B. Write p = (p^B, p^NB), where p^B and p^NB are the attempt probabilities of the bottleneck and non-bottleneck links respectively. By the definition of a non-bottleneck link, there exists p̃ with h_{ij}(y*, p̃) < 0 for all (i, j) ∈ L_NB, hence there exists ζ > 0 with h_{ij}(y*, p̃) < −ζ for all (i, j) ∈ L_NB.

We first show that for any convergent subsequence {(y_μ, p_μ)} and any ε > 0 there exists δ¹_ε > 0 such that for every 0 < μ < δ¹_ε there is a point (y_μ, p̃_μ) with h_{ij}(y_μ, p̃_μ) < −ζ + ε for all (i, j) ∈ L_NB. Since p^{B*} is unique by Lemma 5, the subsequence satisfies p^B_μ → p^{B*}. Construct p̃_μ = (p^B_μ, p̃^{NB}); then for any ε > 0 there exists δ¹_ε > 0 such that for all 0 < μ < δ¹_ε and all (i, j) ∈ L_NB, |y_μ − y*| < ε/2 and |log(x_{ij}(p̃)) − log(x_{ij}(p̃_μ))| < ε/2. Hence |h_{ij}(y_μ, p̃_μ)| ≥ |y* − log(x_{ij}(p̃))| − |y_μ − y*| − |log(x_{ij}(p̃)) − log(x_{ij}(p̃_μ))| > ζ − ε, and since h_{ij}(y_μ, p̃_μ) < 0 this gives h_{ij}(y_μ, p̃_μ) < −ζ + ε for all (i, j) ∈ L_NB.

We then show, by contradiction, that the limit point (y_0, p_0) of a convergent subsequence satisfies h_{ij}(y_0, p_0) < 0 for every non-bottleneck link. Suppose h_l(y_0, p_0) = 0 for some l ∈ L_NB. Then for any ε > 0 there exists δ²_ε > 0 such that −ε < h_l(y_μ, p_μ) < 0 whenever 0 < μ < δ²_ε. Since φ(s) → ∞ as s → 0⁻, there exists ε_1 > 0 such that φ(−ε) > |L_NB| φ(−ζ/2) for all 0 < ε < ε_1, where |L_NB| is the number of non-bottleneck links; set ε_2 = ζ/2, ε_0 = min{ε_1, ε_2}, and δ_{ε_0} = min{δ¹_{ε_0}, δ²_{ε_0}}. For a bottleneck link (i, j), h_{ij} depends only on p^B, and p̃_μ = (p^B_μ, p̃^{NB}), so h_{ij}(y_μ, p̃_μ) = h_{ij}(y_μ, p_μ) for all (i, j) ∈ L_B. Therefore, for any 0 < μ < δ_{ε_0},

A_μ(y_μ, p_μ) − A_μ(y_μ, p̃_μ) = μ ∑_{(i,j)∈L_NB} [φ(h_{ij}(y_μ, p_μ)) − φ(h_{ij}(y_μ, p̃_μ))] ≥ μ [ φ(h_l(y_μ, p_μ)) − ∑_{(i,j)∈L_NB} φ(h_{ij}(y_μ, p̃_μ)) ] ≥ μ ( φ(−ε_0) − |L_NB| φ(−ζ/2) ) > 0,

which contradicts the fact that (y_μ, p_μ) minimizes A_μ. Hence the assumption h_l(y_0, p_0) = 0 for l ∈ L_NB is false, and h_l(y_0, p_0) < 0 for every l ∈ L_NB.

References

[1] D. Bertsekas and R. Gallager, Data Networks, Prentice Hall, 1992.
[2] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[3] N. Z. Shor, Minimization Methods for Non-differentiable Functions, Springer-Verlag, 1985.
[4] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, Wiley, 1992.
[5] F. Facchinei and J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer-Verlag, 2003.
[6] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, MIT Press, 2001.
[7] X. Wang and K. Kar, "Distributed algorithms for max-min fair rate allocation in Aloha networks," Proceedings of the Allerton Conference, Illinois, October 2004.
[8] O. Güler and Y. Ye, "Convergence behavior of interior-point algorithms," Mathematical Programming 60 (1993) 215-228.
[9] F. A. Potra and Y. Ye, "Interior-point methods for nonlinear complementarity problems," Journal of Optimization Theory and Applications 88 (1996) 617-642.


Related questions


Does rip v2 support route summarization?

RIP v2 supports subnets of different sizes (VLSM), so it is really the router that can, or cannot, be configured to summarize routes. This would not be possible in RIP v1: there the size of the subnet is implicit and is not sent with the routing updates.


What is the advantage of a system with more than one type of output?

The most common multi-output systems are used to obtain a differential output; that is, if V1 and V2 are the two outputs, then usually the difference, V2 - V1 or V1 - V2, is used.


In the acceleration equation what do v1 and v2 represent?

v1 = initial velocity; v2 = final velocity


What is the v1 v2 and rotate speed on a dc3?

In standard aircraft V-speed terminology, V1 is the takeoff decision speed, VR is the rotation speed, and V2 is the takeoff safety speed. The specific values for a DC-3 depend on its weight and configuration.


What is the formula for percentage change?

Percentage change = ((v2 - v1) / |v1|) * 100
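For contrast with the percentage-difference formula earlier on this page, a short Python check of percentage change (it is signed, and uses the starting value v1 as the base):

```python
def pct_change(v1: float, v2: float) -> float:
    return (v2 - v1) / abs(v1) * 100

print(pct_change(20, 31))  # 55.0   -> a 55% increase from 20 to 31
print(pct_change(31, 20))  # about -35.48 -> a 35.48% decrease from 31 to 20
```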


What is the formula for potential difference in series circuit?

In a series circuit the total potential difference equals the sum of the potential differences across the individual components: V = V1 + V2 + V3 + ... Across each component, V = IR, with the same current I flowing through every component.


Which gas law describes the relationship between volume and temperature?

Charles' Law: V1/T1 = V2/T2, or equivalently V2/V1 = T2/T1. This signifies a direct relationship between temperature and volume when all other variables are held constant.


Charles Law formula?

The Charles' Law formula is V1/T1 = V2/T2, where V is volume and T is temperature. This formula describes the relationship between the volume of a gas and its temperature at constant pressure.
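A small Python example of applying V1/T1 = V2/T2 to find the new volume; note that the temperatures must be absolute (kelvin), which is an easy detail to miss:

```python
def charles_v2(v1_litres: float, t1_kelvin: float, t2_kelvin: float) -> float:
    """Solve V1/T1 = V2/T2 for V2 (constant pressure, temperatures in kelvin)."""
    return v1_litres * t2_kelvin / t1_kelvin

# 2.0 L of gas warmed from 273 K to 300 K at constant pressure expands to about 2.2 L:
print(round(charles_v2(2.0, 273.0, 300.0), 2))  # 2.2
```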


If acceleration = 0, then according to "d = v2*t - (1/2)*a*t^2", rearranged to get v2 as "v2 = (d + (1/2)*a*t^2)/t", but with acceleration = 0, would v2 = d? (v2 = final velocity, d = distance, a = acceleration, t = time)

The equations of motion that relate velocity, distance, time and acceleration for the specific case of constant acceleration can be written as follows. Acceleration a = (v2 - v1)/t, from which v2 = v1 + a*t. The distance covered during time t is d = vav * t, where vav is the average velocity over the change from v1 to v2; for constant acceleration, vav = (v1 + v2)/2. Substituting into d gives d = ((v1 + v2)/2) * t, from which v2 = 2d/t - v1. If we take the acceleration to be zero (a = 0), the first equation becomes v2 = v1 (there is no acceleration), so the distance equation becomes d = v1*t = v2*t. In other words, with a = 0 you get v2 = d/t, not v2 = d; the two coincide only when t = 1.
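A quick numerical check of the answer above, assuming constant acceleration: with a = 0 the distance is d = v1*t = v2*t, so the quantity the question should arrive at is v2 = d/t, not v2 = d.

```python
def final_velocity(v1: float, a: float, t: float) -> float:
    return v1 + a * t                      # v2 = v1 + a*t

def distance(v1: float, a: float, t: float) -> float:
    v2 = final_velocity(v1, a, t)
    return (v1 + v2) / 2 * t               # d = average velocity * time

v1, a, t = 5.0, 0.0, 4.0
d = distance(v1, a, t)
print(final_velocity(v1, a, t))  # 5.0  -> v2 equals v1 when a = 0
print(d, d / t)                  # 20.0 5.0 -> v2 = d/t, which is not the same as d
```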


What is the minimum number of unequal vectors that can sum to a null vector?

Two vectors: V1 + V2 = 0 where V1 = -V2, i.e., two opposite vectors.