Finding an injective mapping with restrictions for values is NP-Complete

This post is about an interesting problem that I recently tried to solve efficiently but ended up proving its $\mathcal{NP}$-completeness. Informally, the problem is to find an injective mapping between two finite sets with certain, statically defined restrictions on the values. For example, if $A = \{a, b, c, d, e\}$ and $B = \{1, 2, 3, 4, 5\}$, we may want to find an injective mapping $f : A \rightarrow B$ with the restrictions $f(a) \in \{1, 3, 4, 5\}$, $f(b) \in \{3, 4, 5\}$, $f(c) \in \{1, 2, 4, 5\}$, $f(d) \in \{1, 2, 5\}$ and $f(e) \in \{1, 2, 3, 4\}$. I find this problem of particular interest because its nature is, in my opinion, very different from that of the well-known $\mathcal{NP}$-complete problems proven to be so by Karp. In this post we will visualize and formalize this problem, prove its $\mathcal{NP}$-completeness and, finally, derive some interesting corollaries.
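
To make the example concrete, here is a tiny brute-force sketch (exponential and purely illustrative; my own code, not from the post) that searches for such a mapping:

```python
from itertools import permutations

A = ["a", "b", "c", "d", "e"]
B = [1, 2, 3, 4, 5]
allowed = {
    "a": {1, 3, 4, 5},
    "b": {3, 4, 5},
    "c": {1, 2, 4, 5},
    "d": {1, 2, 5},
    "e": {1, 2, 3, 4},
}

# Try every injective assignment of values from B to elements of A.
for values in permutations(B, len(A)):
    f = dict(zip(A, values))
    if all(f[x] in allowed[x] for x in A):
        print(f)  # first hit: {'a': 1, 'b': 3, 'c': 2, 'd': 5, 'e': 4}
        break
```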

Efficient algorithm for finding non-productive rules in context-free grammars

In programs that deal with context-free grammars automatically, we may want to check whether some part of the grammar “makes sense” or whether it is redundant and useless. In other words, we may want to check whether a given non-terminal or production can ever occur in a sentential form. This problem arises often when the program we are implementing accepts user-defined grammars. In this post, we will look at an existing algorithm that solves the problem, and then introduce and implement a new, more efficient algorithm that detects useless parts of an input grammar.
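
For intuition, here is a minimal sketch of the classic fixpoint computation of productive non-terminals (a non-terminal is productive iff one of its rules consists only of terminals and already-productive non-terminals). This naive version is not the more efficient algorithm from the post, and the grammar encoding is a hypothetical one chosen for the example:

```python
# A grammar maps each non-terminal to its rules; a rule is a list of
# symbols, where a symbol is a terminal iff it is not a key of the dict.
grammar = {
    "S": [["A", "b"], ["c"]],
    "A": [["A", "a"]],  # 'A' can never derive a terminal string
}

def productive_nonterminals(grammar):
    """Iterate to a fixpoint: stop when no new productive non-terminal appears."""
    productive = set()
    changed = True
    while changed:
        changed = False
        for nt, rules in grammar.items():
            if nt in productive:
                continue
            # A rule is productive if every symbol in it is a terminal
            # or an already known productive non-terminal.
            if any(all(s not in grammar or s in productive for s in rule)
                   for rule in rules):
                productive.add(nt)
                changed = True
    return productive

print(productive_nonterminals(grammar))  # {'S'}; every rule of 'A' is non-productive
```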

Constructing Hilbert-style F0 proofs with a simple graph-based notation

There exist different kinds of deductive systems in logic. Every deductive system consists of certain axioms and rules of inference. Constructing a proof in any such system is quite difficult: computationally, the problem is $\mathcal{NP}$-hard, since satisfiability can be reduced to it. So it is a good idea to trade off between logical axioms and inference rules. Hilbert (Frege)-style proof systems are interesting because most of them use several axiom schemes but only one inference rule, modus ponens, unlike, for example, Gentzen-style natural deduction. In other words, Hilbert-style proof systems “push” all the complexity of constructing a proof into the axioms: it is hard to syntactically instantiate them, but once that is done, they are easy to combine, as there is only one rule of inference. In this post, I will present a notation for Hilbert-style proofs that I came up with recently, which allows one to construct them in a more systematic way, with less “guessing”.
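
To illustrate the flavor of such proofs, consider the classic derivation of $A \rightarrow A$ from the axiom schemes $K := \varphi \rightarrow (\psi \rightarrow \varphi)$ and $S := (\varphi \rightarrow (\psi \rightarrow \chi)) \rightarrow ((\varphi \rightarrow \psi) \rightarrow (\varphi \rightarrow \chi))$ using only modus ponens. This is a generic textbook example; the exact axiomatization of F0 may differ:

```latex
\begin{align*}
1.\;& (A \to ((A \to A) \to A)) \to ((A \to (A \to A)) \to (A \to A))
    && \text{axiom } S \\
2.\;& A \to ((A \to A) \to A) && \text{axiom } K \\
3.\;& (A \to (A \to A)) \to (A \to A) && \text{MP on } 1, 2 \\
4.\;& A \to (A \to A) && \text{axiom } K \\
5.\;& A \to A && \text{MP on } 3, 4
\end{align*}
```

Finding the right instantiations in steps 1, 2 and 4 is exactly the hard, “guessing” part that the notation in the post is meant to systematize.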

Satisfiability of formulas with both Horn and 2-SAT clauses is NP-Complete

The SAT problem was the first problem proven to be $\mathcal{NP}$-complete. However, many instances of this problem with certain properties can be solved efficiently in polynomial time. Well-known examples of such easy instances are 2-SAT and Horn formulas. All clauses in a 2-SAT formula have at most 2 literals, and Horn formulas consist only of clauses that contain at most one positive literal. In other words, a 2-SAT clause is equivalent to a conjunction of two implications, and any Horn clause can be rewritten as a conjunction of positive literals that implies either a single positive literal or false. It may seem that satisfiability of formulas containing both 2-SAT and Horn clauses can also be decided in polynomial time; however, because of the $\mathcal{NP}$-completeness that we will prove shortly, this is not the case unless $\mathcal{P} = \mathcal{NP}$. The problem also remains $\mathcal{NP}$-complete for dual-Horn clauses instead of normal Horn clauses.
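
As a quick illustration (my own example, not taken from the post), the rewritings look as follows:

```latex
% A 2-SAT clause as a conjunction of its two implication forms:
(x \lor \lnot y) \;\equiv\; (\lnot x \rightarrow \lnot y) \land (y \rightarrow x)

% A definite Horn clause and a headless Horn clause as implications:
(\lnot x_1 \lor \lnot x_2 \lor y) \;\equiv\; (x_1 \land x_2 \rightarrow y) \\
(\lnot x_1 \lor \lnot x_2) \;\equiv\; (x_1 \land x_2 \rightarrow \bot)
```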

Implementing and visualizing Gram-Schmidt orthogonalization

In linear algebra, orthogonal bases have many beautiful properties. For example, matrices consisting of orthonormal column vectors (a.k.a. orthogonal matrices) can be easily inverted by just transposing them. It is also easier, for example, to project vectors onto subspaces spanned by mutually orthogonal vectors. The Gram-Schmidt process is an important algorithm that allows us to convert an arbitrary basis into an orthogonal one spanning the same subspace. In this post, we will implement and visualize this algorithm in 3D with the popular open-source library manim.
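
As a taste of the algorithm itself, here is a minimal standalone NumPy sketch (the post visualizes the process with manim; this plain version is just for illustration):

```python
import numpy as np

def gram_schmidt(basis):
    """Orthogonalize a list of linearly independent vectors."""
    orthogonal = []
    for v in basis:
        w = v.astype(float)
        for u in orthogonal:
            w = w - (w @ u) / (u @ u) * u  # subtract the projection of w onto u
        orthogonal.append(w)
    return orthogonal

basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0])]
for u in gram_schmidt(basis):
    print(u)  # pairwise dot products of the printed vectors are (numerically) zero
```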

How propositional logic minimization and gray codes are connected

In this post I will introduce an interesting connection between Gray codes and the problem of minimizing propositional logic formulas in disjunctive normal form (DNF). Both of these concepts are related to the Hamming distance between two binary sequences, which is the number of bits that need to be flipped in one sequence in order to obtain the other.
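
For example, here is a short illustration (my own, not from the post) that generates the n-bit reflected binary Gray code and checks that consecutive codewords differ in exactly one bit:

```python
def gray_code(n):
    """n-bit reflected binary Gray code: the i-th codeword is i XOR (i >> 1)."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

def hamming(a, b):
    """Hamming distance between two integers: the number of differing bits."""
    return bin(a ^ b).count("1")

codes = gray_code(3)
print([format(c, "03b") for c in codes])
# ['000', '001', '011', '010', '110', '111', '101', '100']
assert all(hamming(x, y) == 1 for x, y in zip(codes, codes[1:]))
```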

The Dining Philosophers problem and different ways of solving it

The dining philosophers problem is a well-known problem in computer science, originally formulated by Edsger Dijkstra to illustrate the possibility of deadlocks in programs where multiple threads lock and unlock multiple shared resources. In such programs, a situation may occur in which every thread is waiting for actions from other threads, so that no thread can continue its normal execution. We will consider different approaches to solving this problem in this post.
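
As a preview, here is a sketch of one classic remedy, the resource-hierarchy solution (not necessarily one of the approaches covered in the post): every philosopher acquires the lower-numbered fork first, which makes a cyclic wait, and hence a deadlock, impossible:

```python
import threading

forks = [threading.Lock() for _ in range(5)]  # one fork between adjacent philosophers

def philosopher(i):
    # Always lock the lower-numbered fork first so that no cycle of
    # threads waiting for each other can form.
    first, second = sorted((i, (i + 1) % 5))
    with forks[first]:
        with forks[second]:
            print(f"philosopher {i} eats")

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```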

Compressing images with singular value decomposition (SVD)

The singular value decomposition plays a major role in linear algebra and has a lot of applications, including lossy image compression. In this post we will discuss it in the context of image compression, focusing on the intuition behind the algorithm without going deep into the theory.
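
The core idea fits in a few lines of NumPy (a simplified sketch for illustration; the image and the value of k here are stand-ins): keep only the k largest singular values of the image matrix and drop the rest.

```python
import numpy as np

def compress(image, k):
    """Best rank-k approximation of a grayscale image via truncated SVD."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    # Keep the k largest singular values, i.e. the k dominant rank-1 terms.
    return U[:, :k] * s[:k] @ Vt[:k, :]

image = np.random.rand(64, 64)  # stand-in for a real grayscale image
approx = compress(image, k=8)
# Instead of 64 * 64 pixel values we only store 8 * (64 + 64 + 1) numbers.
print(np.linalg.norm(image - approx))
```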

Knuth-Morris-Pratt (KMP) algorithm explained

Almost every program has to somehow process and transform strings. Very often we want to identify some fixed pattern (substring) in the string, or, formally speaking, find an occurrence of $\beta := b_0 b_1 \dots b_{n-1}$ in $\alpha := a_0 a_1 \dots a_{m-1} \neq \varepsilon$. A naive algorithm would just traverse $\alpha$ and check if $\beta$ starts at the current character of $\alpha$. Unfortunately, this approach leads us to an algorithm with a complexity of $O(n \cdot m)$. In this post we will discuss a more efficient algorithm solving this problem: the Knuth-Morris-Pratt (KMP) algorithm.
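
For reference, here is a compact sketch of the algorithm in a standard formulation (not the post's code). The failure function stores, for each prefix of $\beta$, the length of its longest proper prefix that is also a suffix, which tells us where matching can resume after a mismatch; this brings the total running time down to $O(n + m)$:

```python
def kmp_search(alpha, beta):
    """Return the index of the first occurrence of beta in alpha, or -1."""
    n = len(beta)
    # fail[i]: length of the longest proper prefix of beta[:i+1]
    # that is also a suffix of beta[:i+1].
    fail = [0] * n
    k = 0
    for i in range(1, n):
        while k > 0 and beta[i] != beta[k]:
            k = fail[k - 1]
        if beta[i] == beta[k]:
            k += 1
        fail[i] = k
    # Scan alpha, falling back via the failure function on mismatches.
    k = 0
    for i, c in enumerate(alpha):
        while k > 0 and c != beta[k]:
            k = fail[k - 1]
        if c == beta[k]:
            k += 1
        if k == n:
            return i - n + 1
    return -1

print(kmp_search("abacabacab", "acab"))  # 2
```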

Copyright © 2019 — 2021 Alexander Mayorov. All rights reserved.