One of the Center's central activities is the Gerard Woeginger Colloquium, named after our late colleague, who was Professor of Algorithms and Complexity at RWTH from 2016 to 2022. Twice per semester, we organize an afternoon with an invited keynote talk and short contributed talks by RWTH doctoral students and postdocs.
10th Colloquium, July 3, 2025
Invited Talk
Martin Skutella (TU Berlin): TBA
Abstract: coming soon
Contributed talks
9th Colloquium, May 14, 2025
Invited Talk
László Végh (University of Bonn): TBA
Abstract: coming soon
Contributed talks
8th Colloquium, January 24, 2025
Invited Talk
Marie Schmidt (Universität Würzburg): Robust Multi-Objective Optimization
Abstract: When modeling real-world challenges as optimization problems, we often encounter uncertainty in problem parameters as well as the coexistence of multiple goals that are difficult to trade off against each other.
Robust optimization is an approach that addresses the challenge of parameter uncertainty, aiming to find a solution that is feasible under all scenarios (parameter realizations) and best in the worst case. Multi-objective optimization addresses the challenge of multiple objective functions by introducing the concept of efficiency (or Pareto optimality), which says that a solution x is worth considering if there is no other solution that is at least as good as x with respect to all considered goals and better in at least one of them.
Several ideas have been proposed to combine these two concepts into the concept of robust efficiency, and we briefly illustrate these before we turn to solution approaches for robust multi-objective problems.
There are (at least) two ways to design solution approaches for computing robust efficient solutions: we can try to generalize algorithms for (single-objective) robust optimization to the multi-objective case, or generalize algorithms for (deterministic) multi-objective optimization. Though most of the approaches to be discussed are applicable to a wider class of problems, for the presentation we focus on biobjective combinatorial problems with budgeted uncertainty in both objective functions using the concept of point-wise robust efficiency.
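To make the efficiency notion from the abstract concrete, here is a minimal Python sketch (not taken from the talk) that filters the Pareto-efficient points from a finite set of objective vectors, assuming all objectives are to be minimized; the data and function names are invented for illustration.

```python
def dominates(a, b):
    """Return True if objective vector a dominates b: a is at least as good as b
    in every objective (minimization) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_efficient(points):
    """Keep exactly those points that are not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

if __name__ == "__main__":
    # Hypothetical biobjective values (cost, delay) of five candidate solutions.
    points = [(4, 7), (5, 5), (6, 4), (7, 7), (4, 8)]
    print(pareto_efficient(points))  # -> [(4, 7), (5, 5), (6, 4)]
```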
Contributed talks
- Timo Gersing (Combinatorial Optimization, RWTH)
Fair Planning of the Out-of-Hours Service for Pharmacies
We consider the problem of planning the out-of-hours service for pharmacies, which ensures a continuous supply of pharmaceuticals. The problem consists in assigning 24-hour shifts to a subset of pharmacies on each day such that an appropriate supply is guaranteed while the burden on pharmacists is minimized. Using a model developed in collaboration with the Chamber of Pharmacists North Rhine, we compute plans minimizing the total number of shifts. The computed plans save many shifts compared to the real plan of the Chamber of Pharmacists North Rhine, but they exhibit an unfair concentration of burden to the disadvantage of some pharmacies. We propose a strategy for integrating fairness into the planning based on a lexicographic fairness criterion. Using theoretical insights, we compute out-of-hours plans that are almost maximally fair while also assigning only a few more shifts.
- Vladimir Stadnichuk (Operations Management, RWTH)
Solving Multi-Follower Mixed-Integer Bilevel Problems with Binary Linking Variables
We study multi-follower bilevel optimization problems with binary linking variables where the second level consists of many independent integer-constrained subproblems. This problem class not only generalizes many classical interdiction problems but also arises naturally in many network design problems where the second-level subproblems involve complex routing decisions of the actors involved. We propose a novel branch-and-cut decomposition method that starts by solving the first level and then iteratively generates second-level feasibility and optimality cuts that are obtained by solving a slightly adjusted second-level problem. Compared to many other existing solution methods, we do not rely on solving the high-point relaxation (HPR) of the bilevel problem but fully project out the second level, resulting in significant computational advantages when the second-level problem is very large or possesses a weak LP relaxation. Also, our approach can be fully automated within a MIP solver, making it very easy to apply for those who do not want to design problem-tailored algorithms for their bilevel problem. Computational results for a bilevel network design problem demonstrate that our approach efficiently solves instances with hundreds of subproblems in a few minutes, significantly outperforming the Benders-like decomposition from the literature on challenging instances.
7th Colloquium, November 22, 2024
Invited Talk
Ivana Ljubić (ESSEC Business School): Introduction to computational bilevel optimization with an application to a pricing problem for last-mile delivery
Abstract: Thanks to significant algorithmic advances in the field of computational (bilevel) optimization, today we can solve much larger and more complicated bilevel problems compared to what was possible two decades ago. In this talk, we will first discuss classical reformulation and solving techniques for bilevel optimization problems. In the second part of the talk, we will focus on a challenging pricing problem arising in last-mile delivery. In this problem, a peer-to-peer logistics platform that has to deliver a certain set of parcels decides how to assign the orders to the available set of carriers. At the lower level, each carrier decides whether to accept the offer, and which parcels should be delivered (if any). The platform can influence carriers' decisions by anticipating the carriers' response and by determining the compensation paid for each accepted request. The problem is modeled as a bilevel optimization problem with an NP-hard profitable tour problem at the lower level. A solution approach based on a branch-and-cut procedure is then proposed to obtain provably optimal solutions.
References:
T. Kleinert, M. Labbé, I. Ljubić, M. Schmidt: Survey on Mixed-Integer Programming Techniques in Bilevel Optimization, EURO J. Comput. Optim. 9: 100007, (2021)
M. Cerulli, C. Archetti, E. Fernandez, I. Ljubić: A bilevel approach for compensation and routing decisions in last-mile delivery, Transportation Science (2024)
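As a toy illustration of the leader-follower structure described in the abstract (and not of the branch-and-cut or reformulation techniques from the talk or the cited papers), the sketch below enumerates discrete compensation offers of a platform, anticipates a carrier's best response, and keeps the most profitable offer. All instance data are invented, and the lower level is deliberately trivial compared to the NP-hard profitable tour problem mentioned above.

```python
from itertools import product

# Toy last-mile pricing instance (all numbers and names invented): the platform offers a
# compensation per parcel; the carrier accepts a parcel only if the compensation covers
# its private delivery cost; the platform earns the parcel value for each delivered parcel.
parcel_value = {"p1": 10.0, "p2": 7.0, "p3": 5.0}   # value to the platform if delivered
carrier_cost = {"p1": 6.0, "p2": 6.5, "p3": 2.0}    # carrier's private delivery cost
price_grid = [2.0, 4.0, 6.0, 6.5, 8.0]              # discrete compensation levels (leader's choices)

def follower_response(offer):
    """Lower level: the carrier accepts exactly those parcels whose compensation
    covers its cost (each parcel is decided independently in this toy model)."""
    return {p for p, c in offer.items() if c >= carrier_cost[p]}

best = None
for prices in product(price_grid, repeat=len(parcel_value)):
    offer = dict(zip(parcel_value, prices))
    accepted = follower_response(offer)              # anticipate the follower's reaction
    profit = sum(parcel_value[p] - offer[p] for p in accepted)
    if best is None or profit > best[0]:
        best = (profit, offer, accepted)

print("best platform profit:", best[0])
print("offered compensations:", best[1])
print("parcels accepted by the carrier:", best[2])
```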
Contributed talks
- Lars Huth (Informatik 1, RWTH)
Claims Trading with Default Costs
In the Eisenberg-Noe model, a model of debt between financial institutions used to quantify systemic risk, there are n banks represented by nodes in a directed graph. Directed edges represent debt claims, and edge weights are liabilities. Each bank can use incoming payments to clear its debt. Moreover, each bank has a non-negative amount of external assets, which can also be used for payment of claims.
With this talk, I will convey basic intuitions for problems in the model, specifically claims trades. In claims trades, there is a bank v in distress and a trading partner w. The latter takes over some claims of v and in return gives liquidity to v. The idea is to rescue v (or mitigate contagion effects from v's insolvency). We focus on the impact of trading claims fractionally, i.e. when v and w can agree to trade only part of a claim. This is joint work with Martin Hoefer and Lisa Wilhelmi. (An illustrative sketch of the clearing computation in this model appears after the contributed talks below.)
- Simon Thomä (Operations Management, RWTH)
A Note on Piecewise Affine Decision Rules for Robust, Stochastic, and Data-Driven Optimization
Multi-stage decision-making under uncertainty, where decisions are taken under sequentially revealing uncertain problem parameters, is often essential to faithfully model managerial problems. Given the significant computational challenges involved, these problems are typically solved approximately. This short note introduces an algorithmic framework that revisits a popular approximation scheme for multi-stage stochastic programs by Georghiou et al. (2015) and improves upon it to deliver superior policies in the stochastic setting, as well as extend its applicability to robust optimization and a contemporary Wasserstein-based data-driven setting. We demonstrate how the policies of our framework can be computed efficiently, and we present numerical experiments that highlight the benefits of our method.
- Tabea Brandt (Combinatorial Optimization, RWTH)
Ensuring optimal roommate fit in the patient-to-room assignment problem
Assigning patients to rooms is a fundamental task in hospitals and, especially, within wards. This so-called patient-to-room assignment problem (PRA) has gained more and more attention in the last few years and many heuristics have been proposed with a large variety of different practical constraints reflecting different settings in hospitals. Classical objectives are avoiding patient transfers between rooms, respecting single-room requests, or choosing patients of similar age as roommates. Apart from age difference, there are many other potential criteria for determining the roommate suitability. In this talk, we show how a feasible patient-to-room assignment with optimal roommate fit can be computed efficiently.
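The sketch referenced in the claims-trading talk above follows here: a minimal, illustrative computation of clearing payments in an Eisenberg-Noe-style network via fixed-point iteration, with invented liabilities and external assets. It only illustrates the model's clearing mechanism, not the claims-trading results of the talk.

```python
import numpy as np

# Toy Eisenberg-Noe instance (all numbers invented): liabilities[i, j] is what bank i owes
# bank j, external[i] is bank i's external assets.
liabilities = np.array([[0.0, 2.0, 1.0],
                        [0.0, 0.0, 3.0],
                        [1.0, 0.0, 0.0]])
external = np.array([0.5, 1.0, 2.0])

p_bar = liabilities.sum(axis=1)  # total obligation of each bank
Pi = np.divide(liabilities, p_bar[:, None],
               out=np.zeros_like(liabilities), where=p_bar[:, None] > 0)  # relative liabilities

def clearing_payments(max_iter=1000, tol=1e-10):
    """Fixed-point iteration for the clearing vector: each bank pays the minimum of its
    total obligation and its available funds (external assets plus incoming payments)."""
    p = p_bar.copy()
    for _ in range(max_iter):
        p_next = np.minimum(p_bar, external + Pi.T @ p)
        if np.max(np.abs(p_next - p)) < tol:
            return p_next
        p = p_next
    return p

print("clearing payments per bank:", clearing_payments())  # banks paying less than p_bar default
```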
6th Colloquium, June 28, 2024
Invited Talk
Frauke Liers (University Erlangen-Nürnberg): Robust Optimization: Mixed-Integer Decisions in a Nonlinear Context and Learning Decisions Over Time
Abstract: Often, determining optima resilient to uncertainty is imperative. Robust optimization ensures feasible solutions as long as the input data manifest themselves within predefined uncertainty sets. We will start by reviewing some concepts in robust optimization. We will highlight novel approaches in mixed-integer (non-)linear robust optimization. We will also point out developments in data-driven methods for constructing appropriate uncertainty sets, also covering recent work where the distributions together with best possible decisions are learnt over time.
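As a small, self-contained illustration of the robust-feasibility idea mentioned in the abstract (feasibility for every realization within an uncertainty set), the following sketch checks a single linear constraint under box uncertainty, once via the standard closed-form robust counterpart and once by enumerating the corners of the box. All numbers are invented; this is not code from the talk.

```python
import numpy as np

# Illustrative only: a single linear constraint a @ x <= b whose coefficient vector a
# is uncertain within a box [a_nom - delta, a_nom + delta] (all numbers invented).
a_nom = np.array([2.0, -1.0, 0.5])
delta = np.array([0.3, 0.2, 0.1])
b = 4.0

def robust_feasible(x):
    """x satisfies the constraint for every realization of a in the box iff it satisfies
    the worst case, which has the closed form a_nom @ x + delta @ |x| <= b."""
    return a_nom @ x + delta @ np.abs(x) <= b

def worst_case_by_enumeration(x):
    """Brute-force check over the corners of the box (for a linear constraint the
    worst case is always attained at a corner)."""
    worst = -np.inf
    for signs in np.ndindex(*(2,) * len(x)):
        a = a_nom + delta * (2 * np.array(signs) - 1)
        worst = max(worst, a @ x)
    return worst

x = np.array([1.0, 1.0, 2.0])
print(robust_feasible(x), worst_case_by_enumeration(x) <= b)  # the two checks agree
```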
Contributed talks
- Matthias Gehnen (Theoretical Computer Science, RWTH)
Online Feedback Vertex Set
A common way to present graphs in an online fashion is by presenting them vertex-wise, together with all the edges that are induced by the already presented vertices. An online algorithm therefore needs to decide what it does with the current input at a defined time, usually after a vertex has been presented or some other property is fulfilled.
In this talk we focus on the Online Feedback Vertex Set problem, where a graph is presented vertex-wise and the online algorithm needs to delete a vertex as soon as it sees a cycle.
Once the cycle is destroyed, the instance continues. We will show that Online Feedback Vertex Set has an approximation ratio (also called competitive ratio) between four and five. This talk is based on a chapter of "Delaying Decisions and Reservation Costs" from last year's COCOON.
- Lennart Kauther (Management Science, RWTH)
Fare Zone Assignment
We introduce a novel optimization problem inspired by the pricing of public transport tickets. We assume a straightforward pricing model where each customer pays a fixed price for each fare zone traversed. Additionally, based on empirical studies, we assume that customers have a preferred route and will opt for alternative transportation methods if their individual cost ceilings are exceeded.
Formally, the problem can be described as follows: Given an undirected graph G=(V,E) and k commodities, where each commodity represents a group of customers with identical routes and cost ceilings, the objective is to partition the graph into fare zones such that the provider's revenue is maximized.
The structure of the profit function renders the problem algorithmically intriguing. In the trivial solution with a single fare zone, revenue is collected for all commodities. However, if a commodity crosses too many fare zones, the associated profit drops to zero as customers choose alternative transportation options.
We demonstrate that this problem is NP-hard for nearly every non-trivial graph class. Specifically, it is NP-hard even on paths and APX-hard on stars. Currently, we are exploring complementary positive complexity results through various approaches, including a non-trivial linear programming (LP) formulation.
- Stephan Marnach (Discrete Optimization, RWTH)
Graph burning - complexity and formulations
Graph burning is a discrete-time process mimicking the spread of fire in graphs and was introduced to model the spread of influence in networks. In every step, existing fires spread to neighboring vertices and a new vertex is set ablaze. Its associated graph parameter, the burning number, denotes the minimal number of steps needed to fully burn a graph. We explore the complexity and mathematical programming formulations of graph burning and its variations.
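To make the burning process just described concrete, here is a small brute-force sketch (illustrative only, and not one of the formulations from the talk) that checks the standard ball-cover characterization of burning sequences and computes the burning number of a tiny example graph.

```python
from collections import deque
from itertools import permutations

def distances_from(adj, source):
    """BFS distances from source in an undirected graph given as an adjacency dict."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def burns_in(adj, k):
    """A graph burns in k rounds iff there are sources x_1,...,x_k such that every vertex
    lies within distance k-i of some x_i (the i-th source has k-i rounds left to spread)."""
    dist = {v: distances_from(adj, v) for v in adj}
    for sources in permutations(adj, k):
        covered = set()
        for i, x in enumerate(sources, start=1):
            covered |= {v for v, d in dist[x].items() if d <= k - i}
        if len(covered) == len(adj):
            return True
    return False

def burning_number(adj):
    """Smallest k such that the graph burns in k rounds (brute force, exponential)."""
    k = 1
    while not burns_in(adj, k):
        k += 1
    return k

# A path on five vertices; its burning number is 3 (ceil(sqrt(n)) for paths).
path5 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(burning_number(path5))  # -> 3
```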
Bonus Talk, May 16, 2024
Invited Talk
Ulrich Pferschy (University of Graz): Optimal rescheduling with new jobs under bounded disruption
Abstract: Rescheduling problems arise when a given schedule has to be modified due to an interruption or unexpected event. We consider the arrival of new orders to be integrated into a given production sequence of so-called old jobs. To avoid a major disruption of the original schedule, the completion time of each old job in the new sequence should not deviate from its original completion time in the old sequence by more than a certain threshold. We consider the minimization of the number of late jobs and the minimization of the total weighted completion time. Our results consist of NP-hardness results, an exact dynamic programming algorithm and a fully polynomial time approximation scheme (FPTAS).
5th Colloquium, April 19, 2024
Invited Talks
Ulrike Schmidt-Kraepelin (TU Eindhoven): Monotone Randomized Apportionment
Abstract: Apportionment is the act of distributing the seats of a legislature among political parties (or states) in proportion to their vote shares (or populations). A famous impossibility by Balinski and Young (2001) shows that no apportionment method can be proportional up to one seat (quota) while also responding monotonically to changes in the votes (population monotonicity). Grimmett (2004) proposed to overcome this impossibility by randomizing the apportionment, which can achieve quota as well as perfect proportionality and monotonicity - at least in terms of the expected number of seats awarded to each party. Still, the correlations between the seats awarded to different parties may exhibit bizarre non-monotonicities. When parties or voters care about joint events, such as whether a coalition of parties reaches a majority, these non-monotonicities can cause paradoxes, including incentives for strategic voting. We propose monotonicity axioms ruling out these paradoxes, and study which of them can be satisfied jointly with Grimmett's axioms. Essentially, we require that, if a set of parties all receive more votes, the probability of those parties jointly receiving more seats should increase. Our work draws on a rich literature on unequal probability sampling in statistics (studied as dependent randomized rounding in computer science). Our main result shows that a sampling scheme due to Sampford (1967) satisfies Grimmett's axioms and a notion of higher-order correlation monotonicity.
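As a hands-on illustration of randomized apportionment, the sketch below implements a plain systematic-sampling rounding that satisfies quota and matches the expected seat counts; it is not necessarily the Grimmett or Sampford scheme analyzed in the talk and comes without their correlation-monotonicity guarantees. Vote counts and party names are invented.

```python
import math
import random

def randomized_apportionment(votes, seats, rng=random):
    """Give every party either the floor or the ceiling of its exact quota, with the
    expected number of seats equal to the quota (systematic sampling, one uniform draw)."""
    total = sum(votes.values())
    quotas = {p: v / total * seats for p, v in votes.items()}
    base = {p: math.floor(q) for p, q in quotas.items()}
    frac = {p: quotas[p] - base[p] for p in votes}

    u = rng.random()                 # a single uniform draw in [0, 1) decides all extra seats
    cumulative, extra = 0.0, set()
    for p in votes:                  # fixed party order; Grimmett additionally permutes at random
        lo, hi = cumulative, cumulative + frac[p]
        # party p receives an extra seat iff some grid point u, u+1, u+2, ... lies in (lo, hi]
        if math.floor(hi - u) > math.floor(lo - u):
            extra.add(p)
        cumulative = hi
    return {p: base[p] + (1 if p in extra else 0) for p in votes}

votes = {"A": 53_000, "B": 24_000, "C": 23_000}   # invented vote counts
print(randomized_apportionment(votes, 10))        # e.g. {'A': 6, 'B': 2, 'C': 2} or {'A': 5, 'B': 3, 'C': 2}
```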
Jannik Matuschke (KU Leuven): Decomposition of Probability Marginals for Security Games, Robust Optimization, and Committee Election
Abstract: In this talk, we discuss the following decomposition problem: We are given a set system, i.e., a collection of subsets of a common ground set (e.g., the ground set could be the edges of a graph and the members of the set system could be the source-sink-paths in that graph). In addition, we are given a vector of probability marginals on the elements of the ground set and a requirement value for each member of the set system. Our goal is to find a distribution for a random subset of the ground set that is consistent with the given marginals and that intersects each member of the set system with a probability equal to or exceeding its requirement value.
Extending earlier work by Dahan, Amin, and Jaillet (MOR 2022), we present a generic algorithm that solves the above decomposition problem for a variety of settings, including cases in which the requirement values exhibit a "supermodularity-like structure", such as abstract networks and Hoffman-Schwartz-type lattice polyhedra. We also discuss numerous applications of our results: (i) security games in which authorities deter smugglers or saboteurs, (ii) robust optimization with uncertain coverage objectives, (iii) committee election with diversity constraints.
4th Colloquium, January 19, 2024
Invited Talk
Jens Vygen (University of Bonn): From TSP to Vehicle Routing - Theory and Practice
Abstract: Over the past few years, our understanding of the approximability of the TSP and natural extensions has improved substantially. Our brief survey here will also include recently discovered reductions, including the first improvement for capacitated vehicle routing since the 1980s, as well as open problems. Moreover, we outline our techniques for handling vehicle routing instances in industrial practice, with many constraints and time-dependent travel times.
Contributed talks
- Rossana Cavagnini (Deutsche Post Chair of Optimization of Distribution Networks)
A matheuristic for the edge set vehicle routing problem with time windows and variants
To reduce emissions, traffic, and noise, municipalities and governments are imposing fees and toll schemes on logistics companies for accessing particular roads. Because such measures increase delivery costs, transportation companies have to consider them when planning routes. In this talk, we address the edge set vehicle routing problem with time windows (ESVRPTW), initially introduced by Reinhardt et al. (2016). The ESVRPTW is a generalization of the vehicle routing problem with time windows in which a fixed cost must be paid for accessing a set of edges. We propose an improved formulation for the ESVRPTW, new variable fixing techniques, and valid inequalities. We introduce a new variant of the ESVRPTW to represent urban contexts in which waiting at customers for the opening of their time windows is not allowed. To solve these problems, we present a matheuristic, called MS-LBM, that consists of a multi-start local descent framework that uses a local branching scheme. In addition to its flexibility in solving all ESVRPTW variants addressed in this talk, MS-LBM provides high-quality solutions faster than a commercial optimization solver and finds new best-known solutions. With an extensive computational study, we derive insights into how allowing waiting at customers affects the edge-set activation and routing decisions depending on the type of infrastructure for which fixed costs must be paid.
- Felix Engelhardt (Research and Teaching Area Combinatorial Optimization)
Integer Programming for Patient-to-Room Assignment
Providing affordable healthcare is a major societal challenge, both in Germany and abroad. In this context, hospitals play a central role, with beds being one critical resource therein. Efficiently managing beds serves multiple objectives: it can both help save "non-value-added time", e.g. by avoiding unnecessary reassignments, and generate revenue by ensuring private room entitlements are met. However, even in large hospitals such as the RWTH Aachen University Hospital, bed management is currently done by hand.
This talk presents binary integer programming formulations that can exactly solve bed management, i.e. patient-to-room assignment (PRA) problems, in (fractions of) seconds for even the largest real-world instances. This is novel insofar as the current state of the art either relies on heuristic methods or makes very strong assumptions before employing integer programming. The proposed work does neither and is able to capture all relevant special cases through simple model extensions.
The aforementioned results are made possible through a combination of combinatorial insights into possible objective values and restrictions of the solution space in terms of possible transfers. Combining both allows us to solve a sequence of heuristic problem variations while also proving optimality combinatorially.
The results are verified through a computational study, showcasing that we achieve almost optimal results for single room entitlements and a reduction of transfers by an order of magnitude compared to real-world data, even in the context of a dynamic model with uncertain patient arrivals.
3rd Colloquium, November 24, 2023
Invited Talk
Lars Rohwedder (Maastricht University): Simpler and stronger approximation algorithms for flow time scheduling
Abstract: Flow time measures the duration from release until completion of a job. It is one of the most natural, but also most notorious objectives in scheduling. In their influential survey from 1999, Schuurman and Woeginger already highlighted the open question of obtaining a polynomial time constant approximation for minimizing the sum of weighted flow times in the seemingly simple single machine setting. This was only recently solved in a combination of a tour-de-force by Batra, Garg, and Kumar and a clever instance reduction by Feige, Kulkarni, and Li. Afterwards, Armbruster, Wiese, and I improved this to a PTAS. Out of these insights we were also able to derive a much simpler constant approximation algorithm, for which I will give close to a full proof here. At the end of the talk, I will conclude with some related open questions.
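For readers unfamiliar with the objective, the following tiny sketch (unrelated to the algorithms in the talk) computes the sum of weighted flow times of a given non-preemptive single-machine schedule, where the flow time of a job is its completion time minus its release date; the job data are invented.

```python
def total_weighted_flow_time(order, release, processing, weight):
    """Sum of w_j * (C_j - r_j) when the jobs are processed non-preemptively in the given
    order on a single machine, never starting a job before its release date."""
    time, total = 0, 0
    for j in order:
        start = max(time, release[j])
        time = start + processing[j]              # completion time C_j
        total += weight[j] * (time - release[j])  # flow time = completion - release
    return total

# Three invented jobs: release dates, processing times, weights.
release    = {"a": 0, "b": 1, "c": 2}
processing = {"a": 3, "b": 1, "c": 2}
weight     = {"a": 1, "b": 4, "c": 2}
print(total_weighted_flow_time(["a", "b", "c"], release, processing, weight))  # 1*3 + 4*3 + 2*4 = 23
print(total_weighted_flow_time(["b", "a", "c"], release, processing, weight))  # a different order, different cost
```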
Contributed talks
- Daniel Mock (Chair of Theoretical Computer Science, RWTH)
Solving a Family of Multivariate Optimization and Decision Problems on Classes of Bounded Expansion
Many NP-hard or W[1]-hard problems are (fixed-parameter) tractable when restricted to special graph classes, especially sparse ones like planar graphs, graphs of bounded treewidth, or even graph classes excluding minors. This is prominently captured by powerful meta-theorems like Courcelle's theorem or the result of Grohe, Kreutzer and Siebertz: every problem expressible in first-order logic is fixed-parameter tractable on nowhere dense classes of graphs. Classes of bounded expansion and nowhere dense classes generalize many familiar classes of sparse graphs and are, in some sense, the most general robust notions of sparsity. For example, this result implies that the k-Dominating Set Problem has an f(k) n^(1+o(1)) algorithm on all nowhere dense classes. However, it does not cover problems that use counting in some sense, as first-order logic is not able to count. An example is the Partial Dominating Set Problem, which asks, given integers k and t, for a set of k vertices that dominate t vertices, or the Partial Red-Blue Dominating Set Problem, which asks to find k vertices that dominate t_r of the red and t_b of the blue vertices. These problems can be expressed in the counting logic FO(>0), which augments first-order logic with a special kind of counting quantifier.
Our contribution is a model-checking algorithm for formulas of the form φ = ∃x_1 ... ∃x_ℓ P(#y ψ_1(x_1,...,x_ℓ, y), ..., #y ψ_r(x_1,...,x_ℓ, y)), where P is some predicate on numbers and the ψ_i are first-order formulas. The running time is f(|φ|) n^(r+1) polylog(n) on graph classes of bounded expansion. We also provide a lower bound under SETH which is tight up to an almost linear factor.
In this talk, I will present the notion of sparsity in graphs, especially classes of bounded expansion, some known meta-theorems for logics with and without counting, and our contribution (without any proofs!).
- Jenny Segschneider (Discrete Optimization, RWTH)
Including Uncertain Capacity in b-matchings while remaining robust
The b-matching problem is a well-known variant of the famous matching problem with many applications. Given an undirected graph, each vertex v has to be matched b(v) times and edges can be used multiple times. It is also solvable in polynomial time. Applications include matching workers or produce to customers, as in the Chinese Postman Problem, but it is also used in heuristics for other well-known problems like the TSP. In theory, the capacity b(v) is fixed and known for each vertex. However, in practice, the capacity is often subject to uncertainties, as workers get sick or customers drop out.
In this talk, we present a new variant of the b-matching problem under uncertain capacity ensuring robustness, called the robust directed b-matching problem. An instance of the problem consists of a directed graph with costs on each arc and uncertain capacities on each vertex. The problem has two stages: first, we set the sum of the matching over all outgoing arcs for each vertex. Second, after realization of the uncertain capacity, we set the b-matching minimizing the costs.
We examine the problem's complexity on different graph classes and show NP-hardness even on directed trees, which are directed acyclic graphs whose underlying undirected graph is a tree. We also showcase the influence of requiring a perfect matching, where each vertex has to be matched exactly b(v) times, which turns out to make the problem easier on some graph classes. This is joint work with Arie Koster.
- Lorena Reyes Rubiano (Chair of Business Analytics, RWTH)
Revenue Maximizing Tariff Zone Planning for Public Transport Service Companies
We propose a new mathematical approach for designing a counting zone tariff system in public transportation. The goal is to maximize revenue while considering spatial patterns like rings or stripes in the resulting zones. The approach assumes discrete zone prices, trip numbers influenced by prices, and passengers choosing the time-shortest path. We present a Lagrangian approximation method, scaling linearly with the number of stops. Extensive numerical studies evaluate network and demand structures. The approach achieves optimal solutions for instances with 121 stops within a few hours. Instances with over 500 stops can be solved in under an hour. Findings show that revenue increases with network connectivity, spatial patterns may slightly decrease revenue (stripes less than rings), and time-shortest paths almost always come at minimum cost. A case study of the San Francisco Bay Area with 1,400 stops demonstrates the approach's practicality. Enforcing a stripe pattern results in more balanced zone areas but sacrifices 3-5% of optimal revenue compared to no pattern.
2nd Colloquium, July 6, 2023
Invited Talk
Daniel Dadush (Networks & Optimization group, CWI Amsterdam): Interior point methods are not worse than Simplex
Abstract: Whereas interior point methods provide polynomial-time linear programming algorithms, the running time bounds depend on bit-complexity or condition measures that can be unbounded in the problem dimension. This is in contrast with the classical simplex method that always admits an exponential bound. We introduce a new polynomial-time path-following interior point method where the number of iterations also admits a combinatorial upper bound O(2^n n^{1.5} log n) for an n-variable linear program in standard form. This complements previous work by Allamigeon, Benchimol, Gaubert, and Joswig (SIAGA 2018) that exhibited a family of instances where any path-following method must take exponentially many iterations. The number of iterations of our algorithm is at most O(n^{1.5} log n) times the number of segments of any piecewise linear curve in the wide neighbourhood of the central path. In particular, it matches the number of iterations of any path following interior point method up to this polynomial factor. The overall exponential upper bound derives from studying the "max central path", a piecewise-linear curve with the number of pieces bounded by the total length of n shadow vertex simplex paths. This is joint work with Xavier Allamigeon (INRIA / Ecole Polytechnique), Georg Loho (U. Twente), Bento Natura (LSE), and László Végh (LSE).
Contributed talks
- Murwan Siddig (Chair of Optimization of Distribution Networks, RWTH)
Multistage Stochastic Programming with a Random Number of Stages: Applications in Hurricane Disaster Relief Logistics Planning
We present a logistics planning problem for prepositioning relief commodities in preparation for an impending hurricane landfall. We assume that the hurricane's intensity, landfall time, and location evolve according to a Markov chain and model the problem as a multistage stochastic programming (MSP) problem with a random number of stages. Our work is the first to introduce an MSP model with a random number of stages for the case where the stochastic process is stage-wise dependent. We benchmark the performance of the MSP model with several alternative decision policies based on two-stage stochastic programming models, including a static policy, a rolling-horizon policy, and a decision tree-based policy. Our numerical results provide key insights into the value of MSP models in hurricane disaster relief logistics planning.
- Katharina Eickhoff (Chair of Management Science, RWTH)
Computation of Walrasian prices with an ascending auction based on matroid partition
Determining prices on a market where a seller aims to sell a finite set of indivisible distinguishable goods to a finite set of buyers is a well-studied problem with countless applications. Prices on the items are called Walrasian (equilibrium) prices if they admit an envy-free allocation of items to the buyers such that all items with positive prices are sold. Such Walrasian prices are guaranteed to exist if the buyers' valuations are gross substitute, a concept introduced by Kelso and Crawford.
A very natural process to determine prices of the goods is an ascending auction, where prices are raised step by step on overdemanded sets. Provided that all buyers have gross substitute valuations, Gul and Stacchetti have shown that an ascending auction which iteratively raises prices on inclusion-wise minimal maximal overdemanded sets terminates with the (unique) componentwise minimal Walrasian price vector. It is by now known that an inclusion-wise minimal maximal overdemanded set can be computed in polynomial time with tools from discrete convexity. We present a simple and purely combinatorial polynomial time algorithm for their computation: we show that the desired overdemanded sets can be computed using the exchange graph of the classical matroid partitioning algorithm.
Moreover, we show that for gross substitute valuations the component-wise minimal prices that admit an envy-free allocation are the same as the minimal Walrasian prices. This enables us to show a very natural monotonicity of minimal Walrasian prices with respect to changes in supply and demand.
This is joint work with Britta Peis, Niklas Rieken, Laura Vargas Koch and László A. Végh.
- Henri Lotze (Chair of Theoretical Computer Science, RWTH)
New Approaches to Relaxing Online Optimization
An online algorithm is a special kind of optimization algorithm that is required to take immediate and irrevocable decisions given only a prefix of an instance. In the design of such algorithms, one is usually interested in bounding the worst-case ratio of the algorithm's solution in relation to the optimal solution on the same instance, which is commonly called the competitive ratio of an algorithm. In this talk, we explore two novel approaches of modifying the classical online model to better model real-life optimization. The first is that of so-called reservations, in which elements of the instance may be "made offline" for a price. The second model is that of predictions, in which the algorithm is given information about the upcoming instance. This information may however be flawed, up to a constant or multiplicative error. We discuss both models using the Online Simple Knapsack Problem as an example. This is joint work with Hans-Joachim Böckenhauer, Elisabet Burjons, Felix Frei, Matthias Gehnen, Juraj Hromkovic and Peter Rossmanith.
1st Colloquium, May 12, 2023
Invited Talk
Vera Traub (University of Bonn): Approximation Algorithms for Connectivity Augmentation
Abstract: The goal of network design is to construct cheap networks that satisfy certain connectivity requirements. The famous algorithm by Jain [Combinatorica, 2001] provides a 2-approximation for a wide class of these problems. However, even for many very basic special cases nothing better is known. One of the most elementary network design problems is the weighted connectivity augmentation problem, which asks to increase the edge-connectivity of a graph in the cheapest possible way by adding edges from a given set. In this talk we present the first approximation algorithms for weighted connectivity augmentation that beat the longstanding approximation factor of 2. This is joint work with Rico Zenklusen.
Contributed talks (Session 1)
- Giacomo Borghi (Chair of Numerical Analysis, RWTH)
A stochastic particle method for multi-objective optimization
- Vladimir Stadnichuk (Chair of Operations Management, RWTH)
An Approximation Algorithm for the Electric Vehicle Scheduling Problem
Contributed talks (Session 2)
- Alexander von Rohr (Institute for Data Science in Mechanical Engineering, RWTH)
Local Search with Bayesian Gradient Estimates
- Stefan Rogosinski (Chair of Data and Business Analytics, RWTH)
Approximation guarantee for the Mixed Logit Assortment Problem based on Revenue-ordered assortments for various distributions
- Yao Zhou (Chair of Information Theory and Data Analytics, RWTH)
Selected applications of convex optimization in future wireless communications
- Eike Cramer (Chair of Process System Engineering, RWTH)
MAiNGO, a combination of deterministic global optimization of MINLP and hybrid mechanistic/data-driven models
- Tim Quatmann (Chair of Software Modeling and Verification, RWTH)
Multi-Objective Model Checking