
# Arkadij Zakrevskij, United Institute of Informatics Problems of NAS of Belarus



### Presentation Transcript

1. A NEW ALGORITHM TO SOLVE OVERDEFINED SYSTEMS OF LINEAR LOGICAL EQUATIONS Arkadij Zakrevskij United Institute of Informatics Problems of NAS of Belarus

2. Outline • How the problem is stated • How the problem can be solved • Theoretical background • Example • Solving the core equation • Experiments • Results of experiments

3. How the problem is stated A system of m linear logical equations (SLLE) with n Boolean variables:

a11x1 ⊕ a12x2 ⊕ … ⊕ a1nxn = y1,
a21x1 ⊕ a22x2 ⊕ … ⊕ a2nxn = y2,
…
am1x1 ⊕ am2x2 ⊕ … ⊕ amnxn = ym.

4. How the problem is stated Any SLLE can be represented by the equation Ax = y, where A is the matrix of coefficients, x is the vector of unknowns, and y is the vector of free terms, all Boolean. Usually A and y are given, and the problem is to find a root – a value of vector x satisfying the equation Ax = y. An SLLE can be • defined (exactly one root), usually m = n, • underdefined (several roots), usually m < n, • overdefined (inconsistent, contradictory – no root), usually m > n.
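For concreteness, the mod-2 arithmetic behind Ax = y can be sketched in a few lines of Python. The list-based representation and the function names are illustrative, not part of the original method:

```python
# Each equation is a row of 0/1 coefficients; ⊕ is addition modulo 2.

def mul_gf2(A, x):
    """Multiply Boolean matrix A by Boolean vector x over GF(2)."""
    return [sum(a & b for a, b in zip(row, x)) % 2 for row in A]

def is_root(A, x, y):
    """x is a root of the SLLE iff Ax = y over GF(2)."""
    return mul_gf2(A, x) == y

# A defined system: m = n = 2, exactly one root.
A = [[1, 0],
     [1, 1]]
y = [1, 0]
print(is_root(A, [1, 1], y))   # True: x = (1, 1) satisfies both equations
```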

5. How the problem is stated Finding optimal solutions: looking for a shortest root in an underdefined SLLE. [Slide figure: a Boolean matrix A with free vector y alongside, and below it the value xT – the shortest root.]

6. How the problem is stated Satisfying the maximum number of equations in an overdefined SLLE. [Slide figure: matrix A with vectors y, e, y*; satisfied equations are marked “+”, two rows are marked “not satisfied”; x* = 000110 – an optimal solution.]

7. How the problem is stated Let m > n, and let all columns of matrix A be linearly independent. Then the system Ax = y is consistent for 2^n values of vector y (called suitable) out of the 2^m possible values. Suppose a suitable vector y* is distorted to y = y* ⊕ e, where e is a distortion vector. The problem is to restore vector y* (or e) for given A and y. When y is not too far from y*, the problem can be solved by finding the suitable value y″ nearest to y; then y″ = y*.

8. How the problem can be solved Matrix A generates a linear vector space M consisting of all distinct sums (modulo 2) of columns of A. The equation Ax = y is consistent (and y is suitable) iff y ∈ M. The problem is to calculate the vector distance d(A, y) between vector space M and vector y. It can be regarded as the distortion vector e if its weight w(e) (the number of 1s) is smaller than ρ – the averaged shortest Hamming distance between elements of M. Vector e can be regarded as well as the correction vector.
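The consistency test y ∈ M amounts to checking whether y lies in the XOR-span of the columns of A. A minimal sketch, assuming each column is packed into a Python integer (bit i holding row i); the helper names are illustrative:

```python
def in_span(cols, y):
    """Return True iff y is a XOR-combination (mod-2 sum) of the given columns."""
    basis = {}                      # highest set bit -> basis vector with that leading bit

    def reduce(v):
        # Eliminate leading bits of v against the basis, top bit first.
        for hb in sorted(basis, reverse=True):
            if (v >> hb) & 1:
                v ^= basis[hb]
        return v

    for c in cols:
        r = reduce(c)
        if r:                       # c adds a new dimension to the span M
            basis[r.bit_length() - 1] = r
    return reduce(y) == 0           # y ∈ M iff y reduces to the zero vector

# Columns 011 and 101 (three rows): their span is {000, 011, 101, 110}.
print(in_span([0b011, 0b101], 0b110))   # True: 110 = 011 xor 101
print(in_span([0b011, 0b101], 0b111))   # False: 111 is not suitable
```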

9. How the problem can be solved The value of ρ is defined by the inequality φ(m, n, ρ) < 1 ≤ φ(m, n, ρ + 1), where φ(m, n, k) is the expected number of suitable values of vector y with weight at most k in a random SLLE with parameters m and n:

φ(m, n, k) = Σ i=0..k C(m, i) · 2^(n−m).
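The threshold can be computed exactly with integer arithmetic, since φ(m, n, k) < 1 is equivalent to Σ C(m, i) < 2^(m−n). The symbol names ρ and φ stand in for glyphs lost from the slide, so treat them as assumptions:

```python
from math import comb

def phi(m, n, k):
    """Expected number of suitable y-values of weight <= k in a random SLLE."""
    return sum(comb(m, i) for i in range(k + 1)) * 2.0 ** (n - m)

def rho(m, n):
    """Largest k with phi(m,n,k) < 1, so that phi(m,n,rho) < 1 <= phi(m,n,rho+1)."""
    bound = 1 << (m - n)          # phi(m,n,k) < 1  <=>  sum of C(m,i) < 2^(m-n)
    total, k = 1, 0               # the sum starts at C(m, 0) = 1
    while total + comb(m, k + 1) < bound:
        k += 1
        total += comb(m, k)
    return k

print(rho(10, 3))    # 2: phi(10,3,2) = 56/128 < 1 <= phi(10,3,3) = 176/128
```

For the experimental parameters m = 1000, n = 100 this threshold comfortably exceeds the distortion weight w(e) = 100 used later, which is what makes restoration feasible there.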

10. Theoretical background Changing some column ai of matrix A for its sum with another column aj, we obtain a matrix A+ equivalent to the initial one (generating the same linear vector space M). Affirmation 1. Vector distance d(A+, y) = d(A, y). Changing vector y in system (A, y) for its sum with an arbitrary column aj of matrix A, we obtain y+. Affirmation 2. Vector distance d(A, y+) = d(A, y).

11. Theoretical background Using the introduced operations, we canonize system (A, y): 1) select n linearly independent rows in matrix A; 2) in each of them delete all 1s except one (put into position i for the i-th selected row); 3) delete the 1s in the corresponding components of vector y. After that the obtained system (A+, y+) is reduced in size: 4) all selected rows are deleted from matrix A+, as well as the corresponding components of vector y+. The remaining rows and components constitute a Boolean ((m − n) × n)-matrix B and an (m − n)-vector u.
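Steps 1)–4) can be sketched as column operations over GF(2), justified by Affirmations 1 and 2. Columns of A and the vector y are packed into integers with bit r holding row r; the names are illustrative, and the columns of A are assumed linearly independent:

```python
def canonize(cols, y, m):
    """Column-reduce (A, y): give column i a single 1 in its pivot row,
    zero y there, then drop the n pivot rows.  Returns (B_cols, u)."""
    cols, n = cols[:], len(cols)
    pivots = []
    for i in range(n):
        # pivot row: a row where column i still has a 1 (never an old pivot row)
        r = next(r for r in range(m)
                 if (cols[i] >> r) & 1 and r not in pivots)
        pivots.append(r)
        for j in range(n):                 # Affirmation 1: add column i to others
            if j != i and (cols[j] >> r) & 1:
                cols[j] ^= cols[i]
        if (y >> r) & 1:                   # Affirmation 2: clear y in row r
            y ^= cols[i]

    def drop(v):                           # delete pivot rows, keep the rest
        keep = [r for r in range(m) if r not in pivots]
        return sum(((v >> r) & 1) << b for b, r in enumerate(keep))

    return [drop(c) for c in cols], drop(y)

# 3 equations, 2 unknowns: columns 011 and 101, free vector y = 111.
B, u = canonize([0b011, 0b101], 0b111, 3)
print(B, u)   # [1, 1] 1 — a core system with m - n = 1 row
```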

12. Theoretical background Affirmation 3. The task of restoration (finding vector d(A, y)) is reduced to solving the core equation Bx = u: find a column subset C in matrix B which minimizes the arithmetic sum w(c) + w(s). In that case d(A, y) = (c, s), the concatenation of vectors c and s, where: c – the Boolean n-vector indicating the columns of B entering C; w(c) – the number of 1s in c; σ(C) – the mod 2 sum of the columns in C; s = σ(C) ⊕ u; w(s) – the number of 1s in s.
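A brute-force sketch of this search (illustrative Python, with columns of B packed as integers); subsets are enumerated in order of increasing size, which anticipates the level-limited search of slides 16–18:

```python
from itertools import combinations

def popcount(v):
    """Weight w(v): the number of 1s in the packed Boolean vector v."""
    return bin(v).count("1")

def solve_core(B_cols, u, level):
    """Try every column subset C with |C| <= level and minimize w(c) + w(s),
    where s is the mod-2 sum of the columns in C, xor-ed with u."""
    best_c, best_s, best_w = 0, u, popcount(u)      # C = empty set
    for k in range(1, level + 1):
        for C in combinations(range(len(B_cols)), k):
            s = u
            for j in C:
                s ^= B_cols[j]                       # s = sigma(C) xor u
            w = k + popcount(s)                      # w(c) + w(s)
            if w < best_w:
                best_c = sum(1 << j for j in C)      # c as a packed bit vector
                best_s, best_w = s, w
    return best_c, best_s, best_w

# Toy core system with one row: B = (1 1), u = (1); the empty subset already
# achieves weight 1, so d(A, y) has weight 1 here.
print(solve_core([1, 1], 1, level=2))   # (0, 1, 1)
```

The number of subsets grows as C(n, k) per level, which is why the slides bound the enumeration by the search level L and stop as soon as a sufficiently light solution appears.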

13. Example [Slide figure: an example system (A, y), an equivalent canonized form (A+, y+), and the reduced core system (B, u).]

14. Example Restoring the initial system: 1. Solve system Bx = u, i.e. find a value c of x which minimizes the function w(Bx ⊕ u) + w(x). 2. Obtain d(A, y) = (c, Bc ⊕ u), which can be accepted as the distortion (correction) vector e. 3. If needed, calculate the suitable vector y* = y ⊕ e, then solve the consistent system Ax = y* and find x*.

15. Example [Slide figure: the matrices and vectors B, u, Bc ⊕ u, e, y, y*, A for the example, the core solution c, and the restored root x* = (1, 1, 1, 0).]

16. Solving the core equation Bx = u The suggested method can be applied when w(e) < ρ. As soon as w(c, s) < ρ for a current subset C from B, the vector (c, s) can be accepted as vector e. The subsets C are checked one by one while increasing the number of columns in C up to L – the level of search. The run-time T strongly depends on L, which in its turn depends statistically on m, n and w(e), with great dispersion.

17. Solving the core equation Bx = u It follows that efficient algorithms can be constructed which solve the problem in a quasi-parallel mode, using a set of many (q) canonical forms of system (A, y) with different bases selected at random.

18. Solving the core equation Bx = u Additional acceleration in finding a short solution can be achieved by randomization. q different canonical forms are prepared, with various bases selected at random. Then the solution is searched for in parallel over all these forms, at exhaustive-search levels 0, 1, etc., until at some current level L a solution with weight w satisfying the condition w < ρ is recognized. As q grows, this level L can be reduced, and with it the run-time T, which depends strongly on L.

19. Experiments 10 random overdefined SLLEs (A, y) were prepared with m = 1000, n = 100, and w(e) = 100, and each of them was solved. The level of search was minimized by: randomization – constructing q random equivalent forms (A+, y+) and transforming them to (B, u); solving the systems (B, u) in parallel, gradually raising the level of search; restricting the search by recognizing short solutions. The experiments were conducted for q = 1, q = 10 and q = 100 to see how the run-time T depends on q.

20. Results of experiments (m = 1000, n = 100, w(e) = 100)

| № | q = 1: L | q = 1: T | q = 10: L | q = 10: T | q = 100: L | q = 100: T |
|---|---|---|---|---|---|---|
| 1 | 10 | 2y | 3 | 10s | 3 | 6m |
| 2 | 12 | 112y | 8 | 27d | 3 | 6m |
| 3 | 10 | 2y | 7 | 4d | 3 | 7m |
| 4 | 12 | 112y | 5 | 33m | 4 | 12m |
| 5 | 10 | 2y | 5 | 1h | 3 | 7m |
| 6 | 14 | 5000y | 7 | 3d | 2 | 6m |
| 7 | 9 | 69d | 6 | 3h | 4 | 15m |
| 8 | 4 | 12s | 4 | 25s | 4 | 8m |
| 9 | 6 | 1h | 4 | 2m | 4 | 9m |
| 10 | 10 | 2y | 5 | 52m | 5 | 1h |

(Run-times T are given in seconds (s), minutes (m), hours (h), days (d), or years (y).)
