Satisfiability of formulas with both Horn and 2-SAT clauses is NP-Complete

The SAT problem was the first problem proven to be $\mathcal{NP}$-complete. However, many instances of this problem with certain structural properties can be solved efficiently in polynomial time. Well-known examples of such easy instances are 2-SAT and Horn formulas. Every clause of a 2-SAT formula has at most two literals, and a Horn formula consists only of clauses that contain at most one positive literal. In other words, a 2-SAT clause is equivalent to a conjunction of two implications, and any Horn clause can be rewritten as a conjunction of positive literals implying either a positive literal or false. It may seem that satisfiability of formulas containing both 2-SAT and Horn clauses can also be decided in polynomial time; however, because of the $\mathcal{NP}$-completeness result that we will prove shortly, this is not the case unless $\mathcal{P} = \mathcal{NP}$. The problem also remains $\mathcal{NP}$-complete when dual-Horn clauses are used instead of ordinary Horn clauses.
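For instance (with illustrative variable names), a 2-SAT clause and Horn clauses can be rewritten in implication form as follows:

```latex
a \lor b \;\equiv\; (\neg a \to b) \land (\neg b \to a)
\qquad
\neg x \lor \neg y \lor z \;\equiv\; (x \land y) \to z
\qquad
\neg x \lor \neg y \;\equiv\; (x \land y) \to \bot
```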

Implementing and visualizing Gram-Schmidt orthogonalization

In linear algebra, orthogonal bases have many beautiful properties. For example, matrices whose columns form an orthonormal basis (so-called orthogonal matrices) can be inverted simply by transposing them. It is also much easier to project vectors onto subspaces spanned by mutually orthogonal vectors. The Gram-Schmidt process is an important algorithm that converts an arbitrary basis into an orthogonal one spanning the same subspace. In this post, we will implement this algorithm and visualize it in 3D with the popular open-source library Manim.
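As a preview of what the post implements, classical Gram-Schmidt fits in a few lines of NumPy. This is a minimal 2D sketch with an illustrative function name, not the post's 3D Manim code:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        # subtract the projections of v onto all previously orthogonalized vectors
        w = v - sum(np.dot(v, b) / np.dot(b, b) * b for b in basis)
        basis.append(w)
    return basis

orthogonal = gram_schmidt([np.array([3.0, 1.0]), np.array([2.0, 2.0])])
```

The resulting vectors span the same subspace as the input but are pairwise orthogonal.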

How propositional logic minimization and Gray codes are connected

In this post I will introduce an interesting connection between Gray codes and the problem of minimizing propositional logic formulas in disjunctive normal form (DNF). Both of these concepts are related to the Hamming distance between two binary sequences, that is, the number of bits that must be flipped in one sequence to obtain the other.
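The reflected binary Gray code mentioned here can be generated with a single bit trick; a minimal sketch (the helper name `gray` is illustrative):

```python
def gray(n):
    # the n-th reflected binary Gray code word
    return n ^ (n >> 1)

codes = [gray(i) for i in range(8)]  # [0, 1, 3, 2, 6, 7, 5, 4]
```

Consecutive code words differ in exactly one bit, i.e. their Hamming distance is 1.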

The Dining Philosophers problem and different ways of solving it

The dining philosophers problem is a well-known problem in computer science, originally formulated by Edsger Dijkstra to illustrate the possibility of deadlocks in programs where multiple threads lock and unlock multiple shared resources. In such programs, a situation may occur in which every thread is waiting for actions from other threads, so that no thread can continue its normal execution. In this post, we will consider different approaches to solving this problem.
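One classical remedy, which may or may not be among the approaches the post covers, is to impose a global acquisition order on the forks so that a circular wait can never form. A minimal Python sketch:

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
meals_eaten = []  # list.append is atomic in CPython, fine for a demo counter

def dine(i, meals=3):
    # Deadlock avoidance by resource ordering: every philosopher picks up
    # the lower-numbered fork first, so no cycle of waiting threads can form.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(meals):
        with forks[first]:
            with forks[second]:
                meals_eaten.append(i)  # "eat"

threads = [threading.Thread(target=dine, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the `sorted` call (i.e. if everyone grabbed the left fork first), all five threads could each hold one fork and wait forever for the other.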

Compressing images with singular value decomposition (SVD)

The singular value decomposition (SVD) plays a major role in linear algebra and has many applications, including lossy image compression. In this post we will discuss it in the context of image compression, focusing on the intuition behind the algorithm rather than going deep into the theory.
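The core idea can be sketched with NumPy's SVD routine; the helper name `compress` and the rank parameter `k` are illustrative:

```python
import numpy as np

def compress(image, k):
    """Rank-k approximation of a 2-D array via truncated SVD."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    # keep only the k largest singular values and the matching singular vectors
    return U[:, :k] * s[:k] @ Vt[:k, :]
```

Instead of all `m * n` pixel values, the rank-`k` approximation stores only `k * (m + n + 1)` numbers, which is where the compression comes from.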

Knuth-Morris-Pratt (KMP) algorithm explained

Almost every program has to process and transform strings in some way. Very often we want to find a fixed pattern (substring) in a string or, formally speaking, find an occurrence of $\beta := b_0 b_1 \dots b_{n-1}$ in $\alpha := a_0 a_1 \dots a_{m-1} \neq \varepsilon$. A naive algorithm would traverse $\alpha$ and check whether $\beta$ starts at the current character of $\alpha$. Unfortunately, this approach leads to an algorithm with a complexity of $O(n \cdot m)$. In this post we will discuss a more efficient algorithm solving this problem: the Knuth-Morris-Pratt (KMP) algorithm.
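As a preview, the KMP idea can be condensed into the following teaching sketch (assuming a non-empty pattern; this is my illustration, not necessarily the post's exact code):

```python
def kmp_find(alpha, beta):
    """Return the index of the first occurrence of beta in alpha, or -1.
    Assumes beta is non-empty. Runs in O(len(alpha) + len(beta))."""
    # failure function: fail[i] is the length of the longest proper prefix
    # of beta that is also a suffix of beta[:i+1]
    fail = [0] * len(beta)
    k = 0
    for i in range(1, len(beta)):
        while k > 0 and beta[i] != beta[k]:
            k = fail[k - 1]
        if beta[i] == beta[k]:
            k += 1
        fail[i] = k
    # scan alpha, reusing the failure function to avoid re-comparing characters
    k = 0
    for i, c in enumerate(alpha):
        while k > 0 and c != beta[k]:
            k = fail[k - 1]
        if c == beta[k]:
            k += 1
        if k == len(beta):
            return i - k + 1
    return -1
```

The failure function lets the search resume at the longest border of the matched prefix instead of restarting from scratch, which is exactly what removes the $O(n \cdot m)$ worst case.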

Visualizing 3D linear transformations and Gaussian elimination with Python and Manim

Matrices are omnipresent in linear algebra. The columns of a matrix describe where the corresponding basis vectors land relative to the initial basis, and every transformed vector is the same linear combination of these columns; this property is called linearity. Algorithms that operate on matrices essentially alter the way vectors get transformed while preserving certain properties. Unfortunately, many algorithms are typically presented to students in a purely numerical fashion, without conveying their geometric meaning. In this post we will first visualize simple linear transformations and then visualize the steps of Gaussian elimination (with row swaps) as a sequence of linear transformations. To do this, we will use Python together with the popular open-source library Manim.
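The claim about columns can be checked directly in NumPy (a small illustration with a made-up matrix, not the post's Manim code):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# the columns of A are exactly where the basis vectors land
assert np.allclose(A @ e1, A[:, 0])
assert np.allclose(A @ e2, A[:, 1])

# linearity: any vector's image is the same linear combination of the columns
v = 4 * e1 - 2 * e2
assert np.allclose(A @ v, 4 * A[:, 0] - 2 * A[:, 1])
```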

Using the PA=LU factorization to solve linear systems of equations for many right-hand sides efficiently

Linear systems of equations come up in almost every technical discipline. The $PA = LU$ factorization is a well-known numerical method for solving such systems efficiently for many right-hand sides, and numerical stability can be preserved by using a pivoting strategy. In this post we will consider performance and numerical stability issues and address them with the $PA = LU$ factorization.
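A self-contained sketch of the idea, with a hand-rolled factorization for illustration (the helpers `plu` and `solve` are mine, not the post's): the expensive elimination happens once, and each right-hand side then costs only two cheap triangular solves.

```python
import numpy as np

def plu(A):
    """PA = LU factorization with partial pivoting (a teaching sketch)."""
    n = A.shape[0]
    U = A.astype(float)
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        # pivoting: bring the largest entry of the column onto the diagonal
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:
            U[[k, p], :] = U[[p, k], :]
            P[[k, p], :] = P[[p, k], :]
            L[[k, p], :k] = L[[p, k], :k]
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

def solve(P, L, U, b):
    # two triangular solves per right-hand side; no re-factorization needed
    y = np.linalg.solve(L, P @ b)  # forward substitution
    return np.linalg.solve(U, y)   # backward substitution
```

In practice one would use a library routine such as SciPy's `lu_factor`/`lu_solve`, which implement exactly this factor-once, solve-many pattern.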

Measuring the size of a regular language is NP-Hard

In this post we will examine an interesting connection between the $\mathcal{NP}$-complete CNF-SAT problem and the problem of computing the number of words generated by some efficient representation of a formal language. Here, I call a representation of a formal language efficient if it has a polynomially long encoding but can describe an exponentially large language. Examples of such efficient representations are nondeterministic finite automata and regular expressions.



Copyright © 2019 — 2021 Alexander Mayorov. All rights reserved.