NOTE: This website is no longer maintained, please visit my current website at http://www.cs.rit.edu/~ib.
Ivona Bezakova
Ph.D. student
Department of Computer Science
University of Chicago
Email: firstname @ cs.uchicago.edu
Papers:

Sampling Binary Contingency Tables with a Greedy Start,
with Nayantara Bhatnagar and Eric Vigoda.
Submitted to Random Structures & Algorithms.
Extended abstract appears in SODA '06.
We study the problem of counting and randomly sampling binary
contingency tables: given row and column sums, we wish to
approximately count (or sample) 0/1 n x m matrices
with those specified sums. We present a simulated annealing
algorithm with running time O((nm)^2D^3d_{max}log^5(n+m)) for any
row/column sums where D is the number of nonzero entries and
d_{max} is the maximum row/column sum. This is the first algorithm
to directly solve binary contingency tables for all row/column sums.
Previous work reduced the problem to the permanent, or restricted
attention to row/column sums that are close to regular. The
interesting aspect of our simulated annealing algorithm is that it
starts at a
nontrivial instance, whose solution relies on the existence of short
alternating paths in the graph constructed by a particular Greedy
algorithm.
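The general flavor of a greedy construction for such tables can be sketched as follows. This is only an illustration of the idea of a greedy start (fill the columns with the largest sums first, into the rows with the most remaining capacity); the paper's actual Greedy algorithm and its analysis via short alternating paths are more refined.

```python
# Illustrative greedy construction of a 0/1 matrix with prescribed
# row and column sums. A sketch only, not the paper's algorithm.

def greedy_table(row_sums, col_sums):
    n, m = len(row_sums), len(col_sums)
    residual = list(row_sums)            # remaining capacity of each row
    table = [[0] * m for _ in range(n)]
    # Fill columns in decreasing order of column sum.
    for j in sorted(range(m), key=lambda j: -col_sums[j]):
        # Place the column's ones into the rows with largest residual capacity.
        rows = sorted(range(n), key=lambda i: -residual[i])[:col_sums[j]]
        if any(residual[i] == 0 for i in rows):
            return None                  # greedy got stuck on this instance
        for i in rows:
            table[i][j] = 1
            residual[i] -= 1
    return table if all(r == 0 for r in residual) else None

t = greedy_table([2, 1, 1], [2, 1, 1])
```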

Accelerating Simulated Annealing Algorithm for the Permanent
and Combinatorial Counting Problems,
with Daniel Stefankovic, Vijay V. Vazirani, and Eric Vigoda.
Submitted to SIAM Journal on Computing (SICOMP).
Extended abstract appears in SODA '06.
Powerpoint slides are here:
[ppt].
We present an improved "cooling schedule" for simulated annealing
algorithms for combinatorial counting problems. Under our new
schedule the rate of cooling accelerates as the temperature decreases.
Thus, fewer intermediate temperatures are
needed as the simulated annealing algorithm moves from the high
temperature (easy region) to the low temperature (difficult region).
We present applications of our technique to colorings and the
permanent (perfect matchings of bipartite graphs).
Moreover, for the permanent, we improve the analysis of the Markov
chain underlying the simulated annealing algorithm. This improved
analysis, combined with the faster cooling schedule, results in an
O(n^7 log^4 n) time algorithm for approximating the
permanent of a 0/1 matrix.
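The role of a cooling schedule can be illustrated on a toy example. Simulated annealing counting algorithms rely on the telescoping identity Z(b_k) = Z(b_0) * prod_i Z(b_{i+1})/Z(b_i), where each ratio is an expectation under the distribution at inverse temperature b_i and is estimated by sampling. The accelerating schedule below (geometrically growing steps, so fewer intermediate temperatures near the cold end) is an illustration of the general idea, not the schedule from the paper; the tiny state space lets us compute everything exactly.

```python
import math

# Toy "energy" function on a 16-element state space: popcount.
H = {x: bin(x).count("1") for x in range(16)}

def Z(beta):
    # Partition function at inverse temperature beta.
    return sum(math.exp(-beta * H[x]) for x in H)

def schedule(beta_max, first_step=0.05, growth=1.5):
    # Accelerating schedule: steps between inverse temperatures grow
    # geometrically as the temperature decreases.
    betas, b, step = [0.0], 0.0, first_step
    while b < beta_max:
        b = min(b + step, beta_max)
        betas.append(b)
        step *= growth
    return betas

betas = schedule(4.0)
ratios = [Z(b2) / Z(b1) for b1, b2 in zip(betas, betas[1:])]
est = Z(0.0) * math.prod(ratios)   # telescopes exactly to Z(beta_max)
```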
Graph Model Selection using
the Maximum Likelihood Paradigm,
with Adam Kalai and Rahul Santhanam.
Appears in ICML '06.
A more networks-oriented version is here. Preprint.
In recent years, there has been a proliferation of theoretical graph
models, e.g., preferential attachment and small-world models,
motivated by real-world graphs such as the Internet topology. To
address the natural question of which model is best
for a particular data set, we propose a model selection criterion for
graph models. Since each model is in fact a probability distribution
over graphs (equivalently a compression scheme), we suggest using the
Maximum Likelihood (ML) principle to compare graph models and select
their parameters. Unfortunately, for the
case of graph models, computing this metric is a difficult algorithmic
task. However, we design and implement MCMC algorithms for computing
the maximum likelihood for four popular models: a power-law random
graph model, a preferential attachment
model, a small-world model, and a uniform random graph model. We hope
that this novel use of ML and our new algorithms will find other
applications.
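For the simplest of the four models, the maximum likelihood needs no MCMC: under the uniform random graph model G(n, m) (uniform over all graphs with n vertices and m edges), every such graph has probability 1 / C(C(n,2), m), so the log-likelihood of an observed graph is available in closed form. The sketch below illustrates just this closed-form case; the other models in the paper require the MCMC algorithms described above.

```python
import math

def uniform_model_log_likelihood(n, m):
    # Log-probability of any fixed graph with n vertices and m edges
    # under the uniform model G(n, m).
    possible_edges = n * (n - 1) // 2
    return -math.log(math.comb(possible_edges, m))

ll = uniform_model_log_likelihood(5, 4)   # 5 vertices, 4 edges
```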

Analysis of Sequential Importance Sampling for Contingency Tables,
with Alistair Sinclair, Daniel Stefankovic, and Eric Vigoda.
Appears in ESA '06.
The sequential importance sampling (SIS) algorithm
has gained considerable popularity for its empirical success.
One of its noted applications is to
the binary contingency tables problem, an important problem in statistics,
where the goal is to estimate the number of 0/1 matrices with prescribed row and column sums.
We give a family of examples in which the SIS
procedure, if run for any subexponential number of trials,
will underestimate the number of tables
by an exponential factor.
This result holds for any of the usual design choices in the SIS algorithm,
namely the ordering of the columns and rows.
These are apparently the first theoretical results on the
efficiency of the SIS algorithm for binary contingency tables.
Finally, we present
experimental evidence that the SIS algorithm is efficient for row and column sums
that are regular. Our work is a first step in determining the class of inputs
for which SIS is effective.
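The SIS scheme can be sketched in a few lines: fill in the columns one at a time, choosing the positions of each column's ones uniformly among the rows that still have remaining capacity. Multiplying the number of available choices at each step gives an unbiased estimator of the number of tables (a dead-ended sample contributes 0). Practical SIS implementations use more refined column distributions; this is only an illustration of the scheme.

```python
import itertools, random

def sis_estimate(row_sums, col_sums, rng):
    residual = list(row_sums)
    weight = 1
    for c in col_sums:
        # Rows that can still accept a 1.
        candidates = [i for i, r in enumerate(residual) if r > 0]
        choices = list(itertools.combinations(candidates, c))
        if not choices:
            return 0                     # dead end: contributes 0
        weight *= len(choices)           # importance weight accumulates
        for i in rng.choice(choices):
            residual[i] -= 1
    return weight if all(r == 0 for r in residual) else 0

rng = random.Random(0)
samples = [sis_estimate([1, 1], [1, 1], rng) for _ in range(100)]
```

On this tiny instance (row sums 1,1 and column sums 1,1) there are exactly two tables, and every SIS sample returns exactly 2.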

Allocating Indivisible Goods,
with Varsha Dani.
Preprint.
Survey of results invited for publication in ACM SIGecom Exchanges,
Vol. 5.3, 2005.
The Max-Min Fairness problem is as follows: given m indivisible
goods and k players, each with a specified valuation function on the
subsets of the goods,
how should the goods be split among the players so as to maximize
the minimum valuation? Viewing the problem from a game-theoretic
perspective, we show that for two
players and additive valuations the expected minimum of the
(randomized) cut-and-choose mechanism is a 1/2-approximation of the
optimum.
To complement this result we show that no truthful mechanism can
compute the exact optimum. We also consider the algorithmic
perspective when the (true)
additive valuation functions are part of the input. We present a simple
1/(m-k+1) approximation algorithm which allocates to every player at
least a 1/k fraction of the value of all but the k-1 heaviest items,
and we give an algorithm whose additive error against the fractional
optimum is bounded by the value of the largest item. The two
approximation algorithms are incomparable in the sense that there
exist instances when one outperforms the other.
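One round of cut-and-choose for two players with additive valuations can be sketched as follows: the cutter partitions the goods to maximize the minimum of her own valuations of the two bundles, and the chooser takes the bundle he values more. The randomized mechanism analyzed above picks the cutter uniformly at random; this sketch implements a single fixed cutter/chooser round by brute force over partitions.

```python
import itertools

def cut_and_choose(cutter_vals, chooser_vals):
    goods = range(len(cutter_vals))
    # Cutter: best partition with respect to her own additive valuation.
    best = max(
        (frozenset(s) for r in range(len(cutter_vals) + 1)
         for s in itertools.combinations(goods, r)),
        key=lambda s: min(sum(cutter_vals[i] for i in s),
                          sum(cutter_vals[i] for i in goods if i not in s)),
    )
    other = frozenset(goods) - best
    # Chooser takes the bundle he values more; cutter keeps the rest.
    if sum(chooser_vals[i] for i in best) >= sum(chooser_vals[i] for i in other):
        chooser_bundle, cutter_bundle = best, other
    else:
        chooser_bundle, cutter_bundle = other, best
    return (sum(cutter_vals[i] for i in cutter_bundle),
            sum(chooser_vals[i] for i in chooser_bundle))

u_cut, u_choose = cut_and_choose([3, 1, 1, 1], [1, 1, 1, 3])
```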
The Poincaré Constant
of a Random Walk in High-Dimensional Convex Bodies,
Master's Thesis at the
University of Chicago, 2002.
Estimating the volume of a convex body is an important algorithmic
problem. High dimensions pose an especially difficult task for the
algorithm designer. To be considered efficient, the running time must
be polynomial in the dimension. The first provably efficient
approximation algorithm was presented by Dyer, Frieze, and Kannan
in 1989, and steadily improved by various authors over the next
decade. Fundamental to these works is the analysis of random walks for
generating random points from a convex body.
Kannan, Lovász, and Simonovits recently analyzed a natural random
walk known as the ball walk. By bounding the so-called conductance,
they obtain (via a Cheeger-type inequality) a bound on the Poincaré
constant of the random walk. Their bound proves
the walk converges to a random point in time O*(n^3 D^2), where n
is the dimension and D is the diameter of the body. We survey and
present a self-contained view of their work. In addition, following an
outline proposed by Jerrum, we slightly
modify the KLS analysis to bound the Poincaré constant without using
a Cheeger-type inequality.
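A minimal sketch of the ball walk itself: from the current point x, propose a uniform point y in the ball of radius delta around x and move there if y lies in the convex body K, otherwise stay put. Here K is taken to be the unit cube, used purely as a simple membership oracle for illustration.

```python
import random, math

def uniform_in_ball(x, delta, rng):
    # Uniform point in the ball of radius delta around x:
    # Gaussian direction, radius scaled by U^(1/n).
    n = len(x)
    g = [rng.gauss(0, 1) for _ in range(n)]
    norm = math.sqrt(sum(v * v for v in g))
    r = delta * rng.random() ** (1.0 / n)
    return [xi + r * gi / norm for xi, gi in zip(x, g)]

def ball_walk(in_body, x, delta, steps, rng):
    for _ in range(steps):
        y = uniform_in_ball(x, delta, rng)
        if in_body(y):        # accept the move only if it stays inside K
            x = y
    return x

in_cube = lambda p: all(0.0 <= v <= 1.0 for v in p)
rng = random.Random(1)
x = ball_walk(in_cube, [0.5] * 3, 0.1, 1000, rng)
```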
 Compact
Representations of Graphs and Adjacency Testing,
"Magister" Thesis at the Comenius University, Bratislava, Slovakia, 2000.
A section of the thesis appeared at the Student Science Conference, Bratislava, Slovakia, 2000.
Succinct representation of graphs is a widely studied problem. We
examine such representations in two aspects: (1) their space complexity,
and (2) the time complexity of the basic "graph operations": the
adjacency test and update operations (adding or removing an edge).
The adjacency matrix is optimal with respect to the adjacency test but
for n vertices its space complexity is always O(n^2) bits,
regardless of the number of edges m. The adjacency list
representation is space-optimal but the adjacency test takes more
time than necessary. The aim of this thesis was to find a
representation as close as possible to the optimal space complexity of
O(m log n) bits, and to the optimal time complexity of O(log n)
bit operations for the adjacency test (and the update operations).
We present a representation using the technique of hashing: it takes
O(m log n) bits and the adjacency test can be performed in
O(log n log log n) bit operations. The
"list-of-all-neighbors" procedure takes O((k + log log n) log n)
bit operations, where k is the number of neighbors to be listed, and
the expected amortized time complexity of the update operations is
O(log n log log n).
We also suggest a new representation based on computing the distance
from all vertices to some given vertex. We show that this
representation is optimal (in both time and space complexities) for
the family of planar graphs.
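The hashing idea can be illustrated with a tiny sketch: store the edge set in a hash table keyed by vertex pairs, so each pair takes O(log n) bits and adjacency tests and updates take expected constant time in the word-RAM model. The thesis works at the level of bit operations with a concrete hashing scheme; Python's built-in set merely stands in for it here.

```python
class HashedGraph:
    def __init__(self):
        self.edges = set()        # hashed edge set: frozensets {u, v}
        self.neighbors = {}       # adjacency lists, kept for neighbor listing

    def add_edge(self, u, v):
        self.edges.add(frozenset((u, v)))
        self.neighbors.setdefault(u, set()).add(v)
        self.neighbors.setdefault(v, set()).add(u)

    def remove_edge(self, u, v):
        self.edges.discard(frozenset((u, v)))
        self.neighbors.get(u, set()).discard(v)
        self.neighbors.get(v, set()).discard(u)

    def adjacent(self, u, v):
        # Expected O(1) hash lookup.
        return frozenset((u, v)) in self.edges

    def list_neighbors(self, u):
        return sorted(self.neighbors.get(u, ()))

g = HashedGraph()
g.add_edge(1, 2)
g.add_edge(2, 3)
```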
 Planar Finite
Automata,
with Martin Pal.
Student Science Conference, Bratislava, Slovakia, 1999.
Several aspects of graph representations of finite automata have been
studied in the literature, e.g., the complexity considerations
involving the number of states. It is natural to
ask whether we indeed need the full generality of possible
interconnections among the states provided by the general definition
of finite automata. We study the influence of restricting the
interconnection network to planar graphs.
We introduce planar finite automata as a type of automata whose graph
representation is planar. We study families of languages defined by
deterministic, nondeterministic, and epsilon-free nondeterministic
planar finite automata and compare them to
R, the family of regular languages. We show that the
deterministic planar automata are weaker than finite automata, except
for the case of a one-letter alphabet. We also prove that
R is equivalent to the family of languages accepted by
(epsilon-free) nondeterministic planar automata.
In addition, we study the influence of restricting the graph
representation to a particular
architecture studied in parallel and distributed computing, the
d-dimensional mesh. We show that for any k there is a language
which cannot be accepted by a mesh k-epsilon-bounded nondeterministic
finite automaton, i.e., an automaton whose graph representation can be
embedded into a mesh and whose maximal number of epsilon
moves is bounded by k.
Course Projects:
Last Update: May 30, 2006