Program for Research in Computing and Information Sciences and Engineering
Lecture I – Dr. Bienvenido Vélez
Information Management Research @ UPRM
In this talk I will report on some of the information management projects that I lead as part of the Advanced Data Management Group under the auspices of PRECISE. First, I will describe the Inforadar-cl search engine project. Inforadar-cl is novel in two interesting ways: it automatically classifies documents into categories that are generated without human intervention, and it is capable of managing multilingual content. Second, I will describe the Mass Transit Passenger Information System (PIS) that we are designing with the Tren Urbano (TU) in mind. The PIS is a web-enabled application developed using the Macromedia Flash dynamic content authoring environment. It combines ideas from real-time and virtual reality systems with the objective of relieving TU riders from depending on static schedules. The emphasis of this talk will be on the mathematical foundations and the open research issues surrounding each project.
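The abstract does not say how Inforadar-cl actually generates its categories; purely as an illustrative sketch of the general idea (not the project's algorithm), categories can be derived without human intervention by clustering TF-IDF document vectors, for example with a naive k-means:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors (as sparse dicts) for a list of tokenized documents."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def kmeans(vectors, k, iters=10):
    """Naive k-means with cosine similarity; the first k vectors seed the centroids."""
    centroids = vectors[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            best = max(range(k), key=lambda i: cosine(v, centroids[i]))
            clusters[best].append(v)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster empties out
                merged = Counter()
                for v in cl:
                    merged.update(v)
                centroids[i] = {t: w / len(cl) for t, w in merged.items()}
    return clusters
```

Each resulting cluster can then be labeled by its highest-weight terms, yielding category names with no human in the loop.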
Simulation of Biomedical Fluid Flows
In recent years, the development of computational techniques in fluid dynamics (CFD), together with dramatically increased computer performance, has resulted in significant breakthroughs that promise to revolutionize the field of vascular research. Simulations of specific clinical problems, based on accurate mathematical models and numerical methods, will provide a new tool for diagnostics and treatment. In this talk, we discuss our research on domain decomposition methods and dynamic load balancing techniques applied to the simulation of cardiovascular fluid flows.
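As a toy illustration of the domain decomposition idea (not the group's actual cardiovascular solver), the classical alternating Schwarz method can be shown on the 1D Laplace equation u'' = 0 with u(0)=0 and u(1)=1, where the exact solve on each overlapping subdomain reduces to linear interpolation between that subdomain's current boundary values:

```python
def solve_subdomain(u, lo, hi):
    """Exact subdomain solve for u'' = 0: linear interpolation
    between the current boundary values u[lo] and u[hi]."""
    for i in range(lo + 1, hi):
        u[i] = u[lo] + (u[hi] - u[lo]) * (i - lo) / (hi - lo)

def alternating_schwarz(n=21, overlap=2, iters=50):
    """Alternating Schwarz on [0, 1] with two overlapping subdomains.
    The exact solution of u'' = 0, u(0)=0, u(1)=1 is u(x) = x."""
    u = [0.0] * n
    u[-1] = 1.0
    mid = n // 2
    for _ in range(iters):
        solve_subdomain(u, 0, mid + overlap)      # left subdomain
        solve_subdomain(u, mid - overlap, n - 1)  # right subdomain
    return u
```

The iterates converge geometrically to u(x) = x, faster with larger overlap. In practice the subdomain solves run concurrently (additive Schwarz), which is exactly where the dynamic load balancing mentioned in the abstract becomes critical.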
Lecture III – Dr. Jaime Seguel
FFTs in Computational X-ray Crystallography
Computational X-ray crystallography is central to the determination of the atomic architecture of crystals and proteins. The time required for the computer determination of one such structure is, however, very large: approximately “one postdoc year”. Large-scale three-dimensional discrete Fourier transforms dominate these computations. And although fast Fourier transform (FFT) methods are fast, reliable, and well established, the sheer size of their input data and the intensity of their use make FFTs a true bottleneck in large-scale computational X-ray crystallography. In this talk, I introduce new Fourier transform methods tailored to protein and crystal structure calculations. These methods are, in general, significantly more efficient than the usual FFT.
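For reference, the standard algorithm the talk improves upon is the radix-2 Cooley-Tukey FFT, which reduces the O(n²) discrete Fourier transform to O(n log n). A minimal sketch, checked against the naive DFT:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def dft(x):
    """Naive O(n^2) DFT, used here only to verify the FFT."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]
```

A three-dimensional transform is computed by separability, i.e. 1D FFTs applied along each axis of the data volume in turn, which is why the sheer size of crystallographic grids makes these transforms the dominant cost.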
Lecture IV – Dr. Manuel E. Bermúdez
Modernizing the Compilers Course
In the computing curriculum, the compilers course is dying. It has lost popularity and is offered less and less frequently. In the ACM 2001 curriculum, the compilers course has been all but discarded. Nevertheless, several fundamental topics are covered only in that course, such as syntax analysis, translation in general with applications outside the area of compilation, and language specification mechanisms such as regular expressions. In this lecture we will discuss our plan to completely reform and reorganize the study of computer-based translation. The new approach is entirely different from the traditional approach found in textbooks: it resembles a spiral, in which topics are revisited repeatedly, each time in greater depth. We have left out topics that are no longer as important, such as code optimization. We will discuss the pedagogical advantages of this approach. The new organization of the course lends itself to examples outside the area of compilation, such as software engineering, data mining, and distributed applications over the Internet. In that context, we will present two results that greatly simplify the presentation of syntax analysis.
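As a small illustration of the syntax analysis material such a course covers (my own sketch, not the lecture's), here is a recursive-descent parser/evaluator for the classic expression grammar expr → term (('+'|'-') term)*, term → factor (('*'|'/') factor)*, factor → '(' expr ')' | number:

```python
import re

def parse_expr(tokens, i):
    """expr -> term (('+'|'-') term)*"""
    val, i = parse_term(tokens, i)
    while i < len(tokens) and tokens[i] in "+-":
        op = tokens[i]
        rhs, i = parse_term(tokens, i + 1)
        val = val + rhs if op == "+" else val - rhs
    return val, i

def parse_term(tokens, i):
    """term -> factor (('*'|'/') factor)*"""
    val, i = parse_factor(tokens, i)
    while i < len(tokens) and tokens[i] in "*/":
        op = tokens[i]
        rhs, i = parse_factor(tokens, i + 1)
        val = val * rhs if op == "*" else val / rhs
    return val, i

def parse_factor(tokens, i):
    """factor -> '(' expr ')' | number"""
    if tokens[i] == "(":
        val, i = parse_expr(tokens, i + 1)
        return val, i + 1  # skip the closing ')'
    return int(tokens[i]), i + 1

def evaluate(s):
    """Tokenize with a regular expression, then parse and evaluate."""
    tokens = re.findall(r"\d+|[-+*/()]", s)
    val, _ = parse_expr(tokens, 0)
    return val
```

The same technique, one function per grammar rule, applies directly to the non-compiler settings the abstract mentions, such as parsing query languages in data mining or message formats in distributed applications.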
Knowledge Management: An Information Nightmare or a Constructivist Dream?
Can knowledge be managed? Before we can answer that question we have to define what knowledge is. The definition of knowledge can take multiple approaches, from the naive notion of knowledge (if it is possible to give a naive notion of knowledge) to an epistemological one, where different schools of thought may find proper grounds to initiate endless discussions. From yet another corner of the theory, another no less controversial definition may come from Artificial Intelligence, at least from the engineering point of view. In this talk we will attempt to partly reconcile one philosophical view, the constructivist, with other possible views that come from Psychology, Education, Management, Computer and Information Science, and Engineering. This view of knowledge (and we won’t even attempt to define knowledge) seems to fit nicely with what knowledge management is trying to achieve. Knowledge management is an emergent field that probably arose in response to what has been called the “information explosion”. It has brought together multiple sciences and disciplines to capitalize on an organization’s knowledge, using novel communication and information technology to store and access information in support of strategies and practices for using and producing knowledge, that is, for promoting innovation. A view will be presented in which we expect Knowledge Management to harness the problem of useful production of, and access to, information and knowledge resources, hopefully without generating a meta-information explosion. With this expectation, we may find fertile grounds for interdisciplinary research between Computer and Information Science and Engineering and other sciences and disciplines, in our attempt to reduce the gap between humans and computers in the quest for higher productivity in the “knowledge era”.
Feature Selection for Supervised Classification
In several fields, such as microarray analysis and computer vision, it is very common to find datasets with thousands of features. In that case, supervised classification (classification where the classes are known) becomes a very heavy computational task unless the dimensionality is reduced. The dimensionality problem in machine learning and statistical modeling remains an important open problem, despite the fact that computers become more powerful every day, since it is more convenient to deal with few features (low-complexity models) when performing a classification. Besides the savings in computing time, a model with few features is easier to interpret. There are several approaches to feature selection in supervised classification. They depend on the way the candidate subsets are generated and on the function used to evaluate the subset under examination. Combining the two criteria yields about 10 nonempty groups of feature selection procedures, and 5 more groups that still remain empty. In addition, a feature selection procedure is a filter if it does not depend on a classifier, and a wrapper if it needs a classifier to perform the selection. In this talk we will compare the performance of wrapper and filter methods in several respects, and we will introduce a new method called FINCO (which makes nonempty one of the 5 groups mentioned above). We will also discuss future work.
Lecture VII – Dr. Domingo Rodríguez
Lecture VIII – Dr. Sanjit K. Mitra
Structural Subband Decomposition: A New Concept in Digital Signal Processing
Polyphase decomposition of a sequence was advanced to develop computationally efficient interpolators and decimators, and it has also been used to design computationally efficient quadrature-mirror filter banks. The polyphase decomposition splits a sequence into a set of sub-sequences, called polyphase components. However, the polyphase components do not exhibit any spectral separation. In this talk, we first review the concept of structural subband decomposition, a generalization of the polyphase decomposition, which decomposes a sequence into a set of sub-sequences with some spectral separation that can be exploited advantageously in many digital signal processing applications. We then outline some applications of the structural subband decomposition, such as efficient design and implementation of FIR digital filters, development of computationally efficient decimators and interpolators, subband adaptive filtering, and fast computation of discrete transforms.
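Concretely, the Type-1 polyphase decomposition by a factor M collects every M-th sample: component k is x[k], x[k+M], x[k+2M], and so on, and interleaving the components reconstructs the original sequence. A minimal sketch of this starting point (the structural subband generalization is not shown here):

```python
def polyphase(x, m):
    """Type-1 polyphase decomposition: component k holds x[k], x[k+m], x[k+2m], ..."""
    return [x[k::m] for k in range(m)]

def recombine(components):
    """Interleave the polyphase components back into the original sequence."""
    m = len(components)
    out = [0] * sum(len(c) for c in components)
    for k, comp in enumerate(components):
        out[k::m] = comp
    return out
```

This is what makes efficient decimators possible: a filter followed by M-fold downsampling can be rearranged so that each polyphase branch operates at the low output rate, saving a factor of roughly M in arithmetic.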
Lecture SPECIAL EDITION - Dr. Manuel Bermúdez
Paradigms and Future Perspectives in Computing
The notion of a paradigm, once a subject exclusively for philosophers, is recognized today as an important topic in all kinds of areas, from science to the business world. In this lecture we will discuss the role that paradigms play in the area of computing. We will give several definitions of the term "paradigm", discuss the concept of paradigm blindness, and present several examples that are both interesting and entertaining. We will use the concept of paradigm to establish some future perspectives in computing, including the revolution in communications and systems engineering.
Overview of Computational Biology
This lecture provides an overview of the various aspects of computational biology, using “scale” as the discriminator between the various computational aspects. It will focus primarily on the computationally demanding and/or data-intensive disciplines. A number of animations will be shown to highlight key points.