[IEEE-bhpjobs] events at Duke: *Mondays* at 11am,
Walter Heger
heger_walter at hotmail.com
Thu Jan 19 21:15:41 EST 2006
Interesting talks.
---------
Hello,
We are going to kick off the Computational Biology seminars next week!
Note, *change of date* -- this semester, seminars will take place on
*Mondays* at 11am, again in the Schiciano Auditorium in the Fitzpatrick
Center (CIEMAS).
Our first speaker will be Duke's own Alex Hartemink from the CS
department. Hope to see y'all there!
-- Uwe Ohler
----------------------------------------------------------------
"Bayesian machine learning applied to transcriptional regulation,
cancer
diagnosis, neural circuits, and more"
The availability of increasing amounts of high-throughput data means that
we have an unprecedented opportunity to make breakthroughs in many hard
problems in biology and medicine, from uncovering transcriptional and
neural circuitry to understanding diseases like cancer. However, these
data also require careful computational and statistical treatment because
we are often reasoning about patterns in very high-dimensional data. I
have been especially interested in two different kinds of machine
learning tasks with numerous applications in biology and medicine:
network inference and classification. This talk will present a broad
collection of results along both of these dimensions. I will omit some of
the methodological details, as they are available in our papers over the
years, and instead present the basic ideas along with biological and
clinical results, ranging from regulatory networks in yeast, to cancer
diagnosis on the basis of gene and protein expression data in humans, to
the identification of genes that are imprinted in mouse, to neural flow
networks in songbirds listening to songs. I hope to suggest how machine
learning and Bayesian statistics can be useful for many kinds of
challenging problems in the field.
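
To give a concrete, purely illustrative flavor of the classification side
of this work, the sketch below fits a plain Gaussian naive Bayes
classifier to synthetic expression-like data. It is only a toy stand-in
for Bayesian classification on gene or protein expression profiles; the
speaker's actual models and data are described in his papers, not here,
and all names and numbers below are invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "expression" data: 100 samples x 20 genes, two classes
    # whose mean expression levels differ slightly.
    X = np.vstack([rng.normal(0.0, 1.0, (50, 20)),
                   rng.normal(0.5, 1.0, (50, 20))])
    y = np.array([0] * 50 + [1] * 50)

    def fit_gaussian_nb(X, y):
        """Estimate per-class priors, feature means, and variances."""
        params = {}
        for c in np.unique(y):
            Xc = X[y == c]
            params[c] = (len(Xc) / len(X),
                         Xc.mean(axis=0),
                         Xc.var(axis=0) + 1e-6)  # small floor for stability
        return params

    def predict(params, x):
        """Return the class with the highest posterior log-probability."""
        def log_post(c):
            prior, mu, var = params[c]
            return (np.log(prior)
                    - 0.5 * np.sum(np.log(2 * np.pi * var)
                                   + (x - mu) ** 2 / var))
        return max(params, key=log_post)

    params = fit_gaussian_nb(X, y)
    acc = np.mean([predict(params, x) == label for x, label in zip(X, y)])
    print(f"training accuracy: {acc:.2f}")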
----
Alexander Hartemink is an Assistant Professor in the Department of
Computer Science at Duke University. He has been at Duke since September
2001, when he received his Ph.D. in Electrical Engineering and Computer
Science from the Massachusetts Institute of Technology. Prior to that, he
received an S.M. in Electrical Engineering and Computer Science from MIT
in 1997, an M.Phil. in Economics from Oxford in 1996, B.S. degrees in
Mathematics and Physics from Duke in 1994, and an A.B. in Economics from
Duke, also in 1994. Hartemink is grateful for the generous support he has
received through the years, including an Alfred P. Sloan Fellowship, an
NSF Faculty Early Career Development (CAREER) Award, an ORAU Ralph E.
Powe Junior Faculty Enhancement Award, an NSF Graduate Research
Fellowship, a Rhodes Scholarship, a Barry M. Goldwater Memorial
Scholarship, an Angier B. Duke Memorial Scholarship, and a Presidential
Scholarship from the White House Commission on Presidential Scholars. He
has recently received funding through the NIH/NIDCD for his collaborative
work with Erich Jarvis on identifying neural flow networks in songbirds.
--
Uwe Ohler | Assistant Professor, Computational Biology
email: uwe.ohler at duke.edu | Institute for Genome Sciences and Policy
phone: (919) 668-5388 | CIEMAS Bldg, 101 Science Dr, Box 3382
fax: (919) 668-0795 | Duke University
http://www.genome.duke.edu | Durham, NC 27708
----------------------------------------------------------------
No-regret algorithms for learning in structured prediction problems
Geoff Gordon
Carnegie Mellon University
LSRC D344
Friday 1/20/06
1:30 - 2:30
Abstract:
Standard machine learning algorithms assume that they have access to a
stream of independent, identically distributed data from which they can
learn their target concept. In many typical applications, though, i.i.d.
data are not available: if I am learning travel times along roads in the
Pittsburgh area so that I can find the best route to drive to CMU, I have
no reason to believe that events like road work or traffic jams are
independent from day to day. The usual advice for dealing with this
problem is "ignore it and hope it will go away," and in fact standard ML
algorithms often work well despite the lack of i.i.d. data. But failures
of these algorithms can and do happen. In this talk I will consider
relaxing the i.i.d. assumption, and describe weaker replacement
assumptions based on the idea of online minimization of regret. I will
give examples of common ML algorithms that already work under these
weaker assumptions, as well as other ones that don't, and discuss how to
design new algorithms. I will focus particularly on structured prediction
problems, in which we have a hypothesis space with interesting internal
structure such as the set of paths in a graph.
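
For a concrete picture of regret minimization, here is a toy sketch of
one classic no-regret procedure, the multiplicative-weights (Hedge)
update, applied to the route-choice example from the abstract. It
illustrates the general idea only, not the algorithms discussed in the
talk, and the drifting travel times are invented for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    n_routes, n_days, eta = 3, 200, 0.1

    weights = np.ones(n_routes)
    total_loss = 0.0
    cumulative_loss_per_route = np.zeros(n_routes)

    for day in range(n_days):
        # Choose a route in proportion to its current weight.
        probs = weights / weights.sum()
        choice = rng.choice(n_routes, p=probs)

        # Travel times need not be i.i.d.; here they drift slowly over time.
        losses = (rng.uniform(0, 1, n_routes)
                  + 0.3 * np.sin(day / 20.0 + np.arange(n_routes)))
        losses = np.clip(losses, 0.0, 1.0)

        total_loss += losses[choice]
        cumulative_loss_per_route += losses

        # Multiplicative-weights update: downweight slow routes.
        weights *= np.exp(-eta * losses)

    regret = total_loss - cumulative_loss_per_route.min()
    print(f"regret vs. best fixed route over {n_days} days: {regret:.1f}")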
Bio:
Dr. Gordon is an Associate Research Professor in the Center for
Automated Learning and Discovery at Carnegie Mellon University. He
works on multi-robot systems, statistical machine learning, and
planning in probabilistic and adversarial domains. His previous
appointments include Visiting Professor at the Stanford Computer
Science Department and Principal Scientist at Burning Glass
Technologies in San Diego. Dr. Gordon received his B.A. in Computer
Science from Cornell University in 1991, and his Ph.D. in Computer
Science from Carnegie Mellon University in 1999.