A new set of lecture notes is available here. These are about group testing and contain a very brief “crash course” on error-correcting codes. They also include five open problems. As usual, I will document the open problems here, while referring to the notes for a much more detailed description of each problem.

Originally there was a problem (numbered 7.1) about group testing bounds. Thanks to the reader who pointed out in the comments that it was solved in 2014! We have replaced it with problem 4.6 here.

**Open Problem 7.1. (Gilbert-Varshamov bound)**

The Gilbert-Varshamov bound is an important lower bound on the optimal rate of error-correcting codes; however, it is neither known to be tight, nor has it been achieved by explicit deterministic constructions of codes (see notes). This motivates two questions:

- Explicit deterministic constructions of codes achieving the GV bound
- Is the GV bound tight?
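For binary codes, the bound states that for any relative distance $latex \delta < \frac{1}{2}$ there exist codes of rate at least $latex 1 - H_2(\delta)$, where $latex H_2$ is the binary entropy function. A minimal sketch of this computation (illustrative only; random linear codes achieve these rates with high probability, but no matching explicit deterministic construction is known):

```python
import math

def binary_entropy(p):
    """H_2(p) = -p log2(p) - (1-p) log2(1-p), with H_2(0) = H_2(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def gv_rate(delta):
    """Gilbert-Varshamov lower bound on the achievable rate of a binary
    code with relative distance delta (meaningful for 0 <= delta < 1/2)."""
    return 1.0 - binary_entropy(delta)

for d in (0.05, 0.1, 0.25):
    print(f"delta = {d:.2f}  ->  rate >= {gv_rate(d):.4f}")
```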

**Open Problem 7.2. (Boolean Classification and Annulus Conjecture)**

In traditional error-correcting codes, one is interested in linear codes with large distance (or, in other words, codes avoiding all non-zero points whose Hamming weight is below the target distance). Inspired by classification applications, one can relax this requirement and ask only that the linear code avoid an annulus $latex \{x : a \le w(x) \le b\}$, where $latex w(x)$ is the Hamming weight of $latex x$. We define $latex R_A(a,b,n)$ as the optimal rate for such a code. The open problem is to resolve the conjecture in this paper:

Prove or disprove: $latex R_A(\alpha n,\beta n,n)=\alpha + (1 - \alpha) R_A(1,\beta n,(1-\alpha)n)+o(1)$

We refer to the notes for more details.
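As an illustration of the relaxed requirement, here is a small brute-force check that a given binary linear code avoids a weight annulus. This is only a sketch on a hypothetical toy code (the length-5 repetition code), not a code from the paper:

```python
from itertools import product

def codewords(G):
    """All codewords of the binary linear code with generator matrix G
    (rows of G are the generators), as tuples over GF(2)."""
    k, n = len(G), len(G[0])
    words = set()
    for msg in product((0, 1), repeat=k):
        w = tuple(sum(msg[i] * G[i][j] for i in range(k)) % 2
                  for j in range(n))
        words.add(w)
    return words

def avoids_annulus(G, a, b):
    """True iff no codeword has Hamming weight w with a <= w <= b."""
    return all(not (a <= sum(c) <= b) for c in codewords(G))

# Toy example: the length-5 repetition code has codewords 00000 and
# 11111 (weights 0 and 5), so it avoids the annulus of weights 1..4.
G_rep = [[1, 1, 1, 1, 1]]
print(avoids_annulus(G_rep, 1, 4))  # True
```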

**Open Problem 7.3. (Shannon Capacity of the 7-Cycle)**

What is the Shannon Capacity of the 7-Cycle?

This is a classical (and fascinating) question in graph theory. See the notes for the definitions, more details and references (see also this blog post).
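For intuition on the definition, recall the analogous question for the 5-cycle, settled by Lovász: the size-5 independent set $latex \{(i, 2i \bmod 5)\}$ in the strong product $latex C_5 \boxtimes C_5$ certifies $latex \Theta(C_5) \ge \sqrt{5}$, which turns out to be tight. A small sketch verifying that classical certificate (no analogous sharp certificate is known for $latex C_7$):

```python
from itertools import combinations

def cycle_adjacent(n, u, v):
    """Adjacency in the n-cycle C_n."""
    return (u - v) % n in (1, n - 1)

def strong_product_adjacent(n, p, q):
    """Adjacency of distinct vertices p, q in the strong product
    C_n x C_n: each coordinate pair must be equal or adjacent."""
    if p == q:
        return False
    ok1 = p[0] == q[0] or cycle_adjacent(n, p[0], q[0])
    ok2 = p[1] == q[1] or cycle_adjacent(n, p[1], q[1])
    return ok1 and ok2

def is_independent(n, vertices):
    return all(not strong_product_adjacent(n, p, q)
               for p, q in combinations(vertices, 2))

# Lovasz's certificate: {(i, 2i mod 5)} is independent in C_5 x C_5,
# so Theta(C_5) >= sqrt(5).
S = [(i, (2 * i) % 5) for i in range(5)]
print(is_independent(5, S), len(S))  # True 5
```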

**Open Problem 7.4. (The Deletion Channel)**

Say we have to send the binary string “10010” through a deletion channel and the first and second bits get deleted; then the message received would be “010”, and the receiver would not know which bits were deleted. This is in contrast with the erasure channel, where bits are erased but the receiver knows which bits are missing (in the case above the message received would be “??010”). In the so-called Trace Reconstruction problem, the same message is sent multiple times and the goal of the receiver is to recover the original message exactly from the many observed corrupted versions of it. We will be interested in the following quantity:

Draw a random binary string with $latex n$ bits and suppose the channel deletes each bit independently with probability $latex \frac{1}{2}$. Define $latex D(n)$ as the number of times the receiver needs to receive the message (with independent corruptions) so that she can decode the message exactly, with high probability. It is easy to see that $latex D(n) \lesssim 2^n$, since roughly once in every $latex 2^n$ times the whole message will go through the channel unharmed. It is possible to show a much better (subexponential) upper bound on $latex D(n)$, but it is not known whether that bound is tight.

What are the asymptotics of $latex D(n)$?
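The easy $latex 2^n$ argument can be illustrated by simulation: keep retransmitting until one copy survives with no deletions at all. This is only a sketch; the function names are illustrative, not from the notes:

```python
import random

def deletion_channel(msg, q, rng):
    """Pass a binary string through a deletion channel: each bit is
    deleted independently with probability q."""
    return "".join(b for b in msg if rng.random() >= q)

def transmissions_until_intact(msg, q, rng):
    """Count the independent transmissions needed until one copy of the
    message arrives with no deletions.  For q = 1/2 this takes about
    2^n attempts on average, matching the easy upper bound (an actual
    decoder can of course do much better by combining partial traces)."""
    count = 0
    while True:
        count += 1
        if deletion_channel(msg, q, rng) == msg:
            return count

rng = random.Random(0)
print(transmissions_until_intact("10010", 0.5, rng))
```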

An interesting aspect of the deletion channel is that different messages may be more or less difficult to decode. This motivates the following question: what are the two (distinct) binary sequences $latex x$ and $latex y$ that are the most difficult to distinguish (say the receiver knows that either $latex x$ or $latex y$ was sent, but not which)?
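One way to compare two candidate messages is via likelihoods: when the deletion probability is $latex \frac{1}{2}$, every deletion pattern is equally likely, so the probability that a candidate message produced a given trace is proportional to the number of ways the trace embeds in it as a subsequence. A sketch of the standard dynamic program for that count (the example strings are hypothetical):

```python
def embedding_count(x, t):
    """Number of ways the trace t can arise from x as a subsequence,
    i.e. the number of deletion patterns mapping x to t.  Standard
    dynamic program: dp[j] holds the number of ways to produce t[:j]
    from the prefix of x scanned so far."""
    dp = [0] * (len(t) + 1)
    dp[0] = 1
    for ch in x:
        # Iterate j downwards so each character of x is used at most once.
        for j in range(len(t), 0, -1):
            if t[j - 1] == ch:
                dp[j] += dp[j - 1]
    return dp[len(t)]

# With deletion probability 1/2, the likelihood of a trace under a
# candidate message is proportional to its embedding count, so the
# receiver compares these counts to decide between x and y.
x, y = "10010", "10100"
t = "100"
print(embedding_count(x, t), embedding_count(y, t))  # 3 4
```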

Hi, your open problem 7.1 has been solved in the paper by A. G. D’yachkov, I. V. Vorob’ev, N. A. Polyansky, and V. Yu. Shchukin:

Bounds on the rate of disjunctive codes,

Problems of Information Transmission, Vol. 50, No. 1, pp. 27–56 (2014). In this paper it is shown that the lower bound of Theorem 7.2 of your notes is tight.

Many thanks for the reference! I’ll edit the notes accordingly and replace it with another problem.

Afonso

The problem was removed and replaced by 4.6., thank you!

It seems that the authors of A. G. D’yachkov et al.: Bounds on the rate of disjunctive codes,

Problems of Information Transmission, Vol. 50, No. 1, pp. 27–56 (2014) have now retracted their claim. Please see their erratum at DOI: 10.1134/S0032946016020083

I apologize if I created confusion!

Thanks for letting us know! I’ll add the problem back later this semester! Afonso