
Conference Schedule


Friday, October 19

                             8:15  -  8:45   Registration: Blocker 1st floor
                             8:45  -  9:00   Opening remarks
                             9:00  -  9:50   Steve Smale: Vision and learning
                            10:00 - 10:30   Coffee break: Blocker 112
                            10:30 - 11:10   Stéphane Boucheron: A poor man's Wilks phenomenon
                            11:20 - 12:00   Alessandro Verri: Regularization Algorithms for Learning


                             2:00 - 2:50    Patrick Wolfe: The Nyström Extension and Spectral Methods in Learning
                             3:00 - 3:30    Coffee break: Blocker 112
                             3:30 - 4:10    Ingo Steinwart: Approximation Theoretical Questions for Support Vector Machines
                             4:20 - 5:00    Vladimir Temlyakov: Universality and Lebesgue inequalities in approximation and estimation



Saturday, October 20

                             9:00  -  9:50   Ingrid Daubechies: Convergence results and counterexamples for AdaBoost
                                                                and related algorithms
                            10:00 - 10:30   Coffee break: Blocker 112
                            10:30 - 11:10   Nira Dyn: Two algorithms for adaptive approximation of bivariate functions
                                                      by piecewise linear polynomials on triangulations
                            11:20 - 12:00   Vladimir Koltchinskii: Sparse Recovery Problems in Learning Theory
 

                             2:00 - 2:50    Dominique Picard: A 'Frame-work' in Learning Theory
                             3:00 - 3:30    Coffee break: Blocker 112
                             3:30 - 4:10    Gilles Blanchard: Resampling-based confidence regions in high dimension
                                                              from a non-asymptotic point of view
                             4:20 - 5:00    Ding-Xuan Zhou: Learnability of Gaussians with Flexible Variances


Sunday, October 21

                             9:00  -  9:50   Albert Cohen: Matching vs. basis pursuit for approximation and learning: a comparison
                            10:00 - 10:30   Coffee break: Blocker 112
                            10:30 - 11:10   Christoph Schwab: Elliptic PDEs with random field input -- numerical analysis
                                                              of forward solvers and of goal oriented input learning
                            11:20 - 12:00   Lee Jones: Finite sample minimax estimation, fusion in machine learning,
                                                       and overcoming the curse of dimensionality

                             1:00 - 1:50    Tomaso Poggio: Learning: neuroscience and engineering applications
                             2:00 - 2:40    Maya Gupta: Functional Bregman Divergence, Bayesian Estimation of Distributions
                                                        and Completely Lazy Classifiers