Seminar Topics
Students are advised to submit similar topics. The list of topics given below is only an indication. The deadline for submission of seminar topics is 14th August 2007.
TITLE: Instruction-Based Self-Testing of Modern Processors
ABSTRACT: Aggressive microprocessor design methodologies using gigahertz clocks and very deep sub-micron technology implementations necessitate at-speed testing for small and distributed timing defects caused by process variations during manufacturing. A primary objective of such tests is to detect faults that can lead to performance degradation, such as delay faults. However, at-speed testing using an external tester is not an economically viable scheme, and hardware BIST leads to unacceptable performance penalty and area overhead. A new paradigm, instruction-based self-testing, can alleviate these problems as it uses processor instructions to deliver test patterns and collect the test responses. It also has the ability to link to low-level fault models. Being an inherently non-intrusive approach, it is well suited for testing processors, embedded cores, IP cores, and SoCs. Moreover, the same tests can also be used for online periodic testing of processors to improve reliability in the field. We have proposed an instruction-based self-testing methodology for delay fault testing of modern processors in a chronological way, first dealing with non-pipelined processors, and then with pipelined processors and superscalar architectures. In order to test a processor, a graph-theoretic model based on the instruction set architecture and RTL description is developed. This model, in conjunction with structural and functional information, is used to identify and classify all paths into functionally testable and untestable paths and to generate test instruction sequences that can be applied in the functional mode of operation.
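To make the path-classification idea concrete, the Python sketch below builds a toy RTL connectivity graph and a hypothetical instruction/activation table, then marks each structural path as functionally testable or untestable depending on whether some instruction exercises it in functional mode. The graph, instruction set and activation sets are invented purely for illustration; they are not the model developed in this work.

# Illustrative toy only: classify structural paths of a tiny datapath as
# functionally testable or untestable, based on a hypothetical table of which
# datapath segments each instruction activates in functional mode.

# Hypothetical RTL-level connectivity graph: node -> list of successor nodes.
datapath = {
    "regfile":   ["alu", "mul"],
    "alu":       ["writeback"],
    "mul":       ["writeback"],
    "writeback": [],
}

# Hypothetical mapping of instructions to the datapath edges they activate.
instruction_activates = {
    "ADD": {("regfile", "alu"), ("alu", "writeback")},
    "SUB": {("regfile", "alu"), ("alu", "writeback")},
    # No instruction in this toy ISA drives regfile -> mul, so paths through it
    # are functionally untestable (reachable only in scan/test mode).
}

def enumerate_paths(graph, node, prefix=()):
    """Depth-first enumeration of all structural paths starting at `node`."""
    prefix = prefix + (node,)
    if not graph[node]:
        yield prefix
    for succ in graph[node]:
        yield from enumerate_paths(graph, succ, prefix)

def classify(graph, activations):
    exercised = set().union(*activations.values())
    for path in enumerate_paths(graph, "regfile"):
        edges = set(zip(path, path[1:]))
        status = "functionally testable" if edges <= exercised else "functionally untestable"
        print(" -> ".join(path), ":", status)

classify(datapath, instruction_activates)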
TITLE: State of the Art CMOS Devices
ABSTRACT: The talk will focus on current CMOS transistor design and technology trends for the 65nm technology node and beyond. Scaling factors, parasitics and device design are used to set up an initial technology expectation. Device performance, speed, product yield and reliability considerations are among the key factors needed for the successful launch of any product, from conception to the ramp of a technology. Current approaches for meeting the transistor performance requirements that are essential to stay competitive will be discussed. A brief overview will also be provided of the new device architectures that will be needed for the feasibility of future technologies.
TITLE: Design of RF-CMOS Integrated Circuits for Wireless Communication
ABSTRACT: The continuous progress of silicon technology has enabled the emergence of digital mobile broadband communication systems for voice, data, multimedia and position with good quality of service. Data-rate and mobility trade-offs and different standards like 2G, 3G, Bluetooth, WLAN, GPS and digital multimedia broadcasting are leading to multimode requirements, and issues relating to coexistence and inter-working of these different technologies must be solved. Single-chip integration with the digital part, high integration density and excellent RF performance, low power consumption and low cost under mass production aspects are further requirements. First system-on-chip (SoC) demonstrations show that today's CMOS technologies seem to be able to fulfil all these requirements. This lecture will review current RF-CMOS technologies, RF architectures and re-configurability principles as well as circuit and system design aspects for mobile communication applications. It will consider special requirements on wafer processes like leakage and analogue and RF capabilities and will look to the world of system-level design. In this context, power levels, form factors and cost are key requirements for system-in-package and system-on-chip solutions. Of course, new challenges for the future will be considered and explored, too.
TITLE: Speaker Diarization: Approaches and Improvements
ABSTRACT: Audio diarization is the task of automatically segmenting an input audio stream into acoustically homogeneous segments and attributing them to sources. In general, these sources can include particular speakers, music, background noise sources and other source/channel characteristics. In the NIST Rich Transcription evaluations within the DARPA EARS program, the task was limited to speaker diarization, namely providing a list of 'who spoke when?' throughout some audio data. Speaker diarization has many applications, such as enabling speakers to be tracked through debates, allowing speaker-based indexing of databases, aiding speaker adaptation in speech recognition and improving the readability of automatic transcripts. This talk describes the speaker diarization task and the speaker diarization system developed at Cambridge University, including recent improvements that enable it to perform comparably to other state-of-the-art diarization systems on the Rich Transcription 2004 diarization evaluation data.
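The abstract does not spell out the segmentation machinery, but a building block common to many diarization systems is Bayesian Information Criterion (BIC) change detection between adjacent analysis windows under Gaussian models. The sketch below shows that criterion on simulated feature vectors; it is illustrative only and is not the Cambridge system described in the talk.

import numpy as np

def delta_bic(x, y, lam=1.0):
    """Delta-BIC for the hypothesis that x and y come from two different Gaussians
    rather than one. Delta-BIC > 0 favours placing a change point between them."""
    z = np.vstack([x, y])
    n, d = z.shape
    n1, n2 = len(x), len(y)
    logdet = lambda m: np.linalg.slogdet(np.cov(m, rowvar=False))[1]
    penalty = 0.5 * lam * (d + d * (d + 1) / 2) * np.log(n)
    return 0.5 * (n * logdet(z) - n1 * logdet(x) - n2 * logdet(y)) - penalty

# Toy usage: two segments with different statistics should give Delta-BIC > 0.
rng = np.random.default_rng(0)
seg_a = rng.normal(0.0, 1.0, size=(300, 12))   # e.g. 12-dim MFCC-like features
seg_b = rng.normal(2.0, 1.5, size=(300, 12))
print(delta_bic(seg_a, seg_b))                                  # expected positive
print(delta_bic(seg_a, rng.normal(0.0, 1.0, size=(300, 12))))   # expected negative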
TITLE: Evolving Role of Schottky Contacts from Traditional to Molecular/CNT Devices
ABSTRACT: Traditionally, the Schottky contact has been characterized mainly by the difference of the work functions of the contacting metal and semiconductor. However, recent studies of molecular and carbon nanotube transistors seem to indicate that additional factors, such as the metal electrode's atomic species, chemical properties and contact geometry, influence the nature of the Schottky contact. More importantly, the electrical characteristics of the channel itself seem to be affected by the Schottky contact. Based on the results of some recent ab initio calculations carried out on both molecular and carbon nanotube devices, the evolving role of Schottky contacts in traditional and molecular/CNT devices is discussed.
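For reference, the "traditional" picture the abstract alludes to is the ideal Schottky-Mott estimate, in which the barrier height follows from bulk material parameters alone. The tiny sketch below uses textbook-style values (gold on silicon, assumed here purely for illustration); the deviations of real contacts from this ideal estimate are exactly what the ab initio studies in the talk probe.

# Minimal sketch of the traditional picture: the ideal (Schottky-Mott) barrier
# height depends only on the metal work function and the semiconductor electron
# affinity. Values are textbook-style estimates, assumed for illustration.
phi_metal   = 5.1   # eV, approx. work function of gold
chi_silicon = 4.05  # eV, approx. electron affinity of silicon

phi_barrier_n = phi_metal - chi_silicon   # ideal n-type barrier height
print(f"Ideal Schottky-Mott barrier height: {phi_barrier_n:.2f} eV")
# Measured barriers often deviate from this value, which is the contact-dependent
# behaviour that the ab initio calculations in the talk aim to explain.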
TITLE: Next Generation Wireless Systems: A VLSI Design Perspective
ABSTRACT: The convergence of voice, data, and video applications has created growing demand for data-centric wireless connectivity. This has fueled tremendous research in the communication theory community. The development over the last decade of complex signal processing algorithms and techniques such as multi-input multi-output (MIMO) systems, OFDM and iterative error correction coding has the potential to provide high-throughput, reliable wireless systems. Traditional approaches using digital signal processors are no longer good enough for these advanced techniques. To realize the potential of these advanced techniques in next generation wireless systems (beyond 3G), VLSI architectures need to be optimized for reduced complexity and lower power consumption. This can be achieved by jointly optimizing the VLSI architecture and the algorithms. The talk will focus on joint optimization using the example of MIMO receiver design, and will conclude with a presentation of the design flow from algorithm to silicon.
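As a rough illustration of the kind of algorithm whose hardware cost drives joint algorithm/architecture optimization, the sketch below implements a simple zero-forcing (pseudo-inverse) MIMO detector on a simulated channel. This is not claimed to be the receiver considered in the talk; antenna counts, noise level and BPSK signalling are all assumptions made for the example.

import numpy as np

# Zero-forcing detection for a narrowband MIMO channel y = H x + n. Practical
# receivers use more sophisticated detectors (e.g. MMSE or sphere decoding),
# whose complexity is what motivates careful VLSI architecture design.
rng = np.random.default_rng(1)
nt, nr = 4, 4                                   # transmit / receive antennas (assumed)
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)

bits = rng.integers(0, 2, size=nt)
x = 2 * bits - 1                                # BPSK symbols in {-1, +1}
noise = 0.05 * (rng.normal(size=nr) + 1j * rng.normal(size=nr))
y = H @ x + noise

x_hat = np.linalg.pinv(H) @ y                   # zero-forcing equalization
bits_hat = (x_hat.real > 0).astype(int)
print("sent:", bits, "detected:", bits_hat)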
TITLE: Challenges in Advanced Technologies and Enablement
ABSTRACT: IBM offers advanced SOI and CMOS technologies for servers and the digital foundry business, and has 45nm and 32nm nodes in development. The CMOS performance roadmap has significant performance issues which are being resolved not by conventional scaling but by innovation. RF/AMS technologies in RFCMOS and SiGe BiCMOS are also developed. RFCMOS will face significant design challenges at advanced nodes; some of these challenges will be discussed. SiGe BiCMOS is reaching very high performance levels of 300 and 400 GHz. Scaling issues in SiGe BiCMOS will also be discussed. These technologies are only useful if there are good models and process design kits which address the issues of the SOI, CMOS, RFCMOS, and SiGe BiCMOS technologies. Some key attributes of advanced models and design kits will be discussed.
TITLE: Copper/Low k Interconnect System and the Damascene Process in IC Fabrication
ABSTRACT: About 15 years ago, the IC fabrication industry migrated from the aluminum/SiO2 interconnect system to the copper/low-k system to reduce signal propagation (RC) delay in circuits. Since then, several material and process advances have been made worldwide, which are discussed in this brief presentation. Some of the highlights are: damascene etching to enable the formation of copper wires, the electrolytic deposition process for copper, thin-film deposition methods for high aspect ratio features, and chemical mechanical planarization to enable building upwards of nine levels of copper wiring.
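The motivation can be quantified to first order: interconnect delay scales with the product of wire resistivity and dielectric constant. The short calculation below uses representative textbook values (not process-specific data) to estimate the RC benefit of the Cu/low-k migration.

# First-order estimate of the Cu/low-k benefit: RC delay ~ (resistivity) x (permittivity).
# Values below are representative textbook numbers, not data from any specific process.
rho_al, rho_cu = 2.7, 1.7        # micro-ohm*cm, approx. resistivities of Al and Cu
k_sio2, k_lowk = 3.9, 2.7        # relative permittivities of SiO2 and a typical low-k film

rc_ratio = (rho_cu * k_lowk) / (rho_al * k_sio2)
print(f"Cu/low-k RC delay is ~{rc_ratio:.2f}x that of Al/SiO2 "
      f"(~{(1 - rc_ratio) * 100:.0f}% reduction for the same geometry)")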
TITLE: The GMRT - Challenges in Electronics and Signal Processing
ABSTRACT: The radio window is an integral part of the astronomer's use of the electromagnetic spectrum to study the Universe. This talk will highlight the role of the Giant Metrewave Radio Telescope (GMRT) as a world class facility for front-line work in Radio Astronomy in the frequency range of 50 MHz to 1420 MHz, with the emphasis on the technical aspects of the telescope. We will see how the GMRT, consisting of 30 fully steerable 45 metre diameter antennas located in a 20 km region, can be used as an aperture-synthesis array to produce maps of the radio brightness of extended sources, as well as a phased array with a highly directive beam to study compact radio sources like pulsars. The main features of the receiver chain will be described. Each antenna is equipped with multi-frequency feeds and a low noise, high gain heterodyne receiver system, the signal from which is transmitted to the central station using optical fibres. At the central station, the multi-purpose back end receivers to process and combine the signals from the 30 antenna stations include (i) a 256 spectral channel correlator (to produce the cross spectra from each pair of antenna signals) and (ii) a phased array combiner followed by a high time resolution pulsar receiver. We will see that the sophisticated electronics is backed up with modern computing facilities, control and analysis software, and signal processing techniques that exploit the full capability and versatility of the GMRT. Some of the new results and discoveries made with the GMRT will also be highlighted. Plans to significantly improve the capabilities of the GMRT are currently underway, and we will look at the challenging opportunities in the development of state of the art electronics and signal processing techniques that this offers.
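The 256-channel correlator mentioned above computes, for every pair of antennas, the cross power spectrum of their signals. The numpy sketch below reproduces that operation for a single baseline on simulated data (channelize with an FFT, multiply one spectrum by the conjugate of the other, average); real GMRT hardware does this for all antenna pairs in dedicated electronics, so this is only a numerical illustration.

import numpy as np

# Minimal FX-style correlation for ONE baseline, with 256 spectral channels as in
# the abstract. Signals are simulated: a shared "sky" component plus independent
# receiver noise at each antenna.
n_chan, n_spectra = 256, 1000
rng = np.random.default_rng(42)

common = rng.normal(size=2 * n_chan * n_spectra)        # correlated sky signal
ant1 = common + 0.5 * rng.normal(size=common.size)       # + independent receiver noise
ant2 = common + 0.5 * rng.normal(size=common.size)

def channelize(x):
    blocks = x.reshape(-1, 2 * n_chan)                   # 2N real samples -> N channels
    return np.fft.rfft(blocks, axis=1)[:, :n_chan]

cross = np.mean(channelize(ant1) * np.conj(channelize(ant2)), axis=0)
print("cross-spectrum channels:", cross.shape,
      " mean cross-power amplitude:", float(np.abs(cross).mean()))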
TITLE: Fuzzy-rough approach to pattern classification
ABSTRACT: The primary objective of any supervised function approximator is to learn an unknown function (or at least a good approximation of it) from a set of observed input-output patterns. Pattern classification is a special case of function approximation in which each pattern is assigned to a particular class, i.e., the output in a classification problem is one of a set of discrete values corresponding to the classes rather than a real-valued function.
The present research proposes a fuzzy-rough approach to pattern classification, and develops some hybrid algorithms and optimization techniques for attribute selection and induction of fuzzy decision trees. The major contributions of the research are: formulation of hybrid fuzzy-rough measures and their analysis from a pattern classification viewpoint, incorporation of these measures for the development of attribute selection and novel fuzzy-rough decision tree induction algorithms, the development of neural-like parameter adaptation strategies in the framework of neuro-fuzzy decision trees, and a methodology for the structure and initial parameter identification of a generalized class of Gaussian RBF networks based on fuzzy decision trees. The proposed algorithms have been stated explicitly in formal notation and in pseudocode. Extensive computational experiments have been reported, and the proposed algorithms have been experimentally compared with well-known algorithms available in the literature using real-world standard datasets.
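For readers unfamiliar with fuzzy-rough attribute selection, the sketch below computes one widely used fuzzy-rough dependency degree (fuzzy similarity relation, Lukasiewicz implicator, fuzzy lower approximation and positive region). It is a generic illustration of the flavour of measure involved, not necessarily the hybrid measures developed in this work; the data and similarity relation are invented.

import numpy as np

# Toy data: two attributes normalised to [0, 1] and a crisp decision class.
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.8, 0.2], [0.9, 0.1]])
y = np.array([0, 0, 1, 1])

def dependency(X, y, attrs):
    """Fuzzy-rough dependency degree gamma_P for the attribute subset `attrs`."""
    n = len(X)
    Xa = X[:, attrs]
    # Fuzzy similarity: 1 - max attribute-wise distance over the chosen attributes.
    sim = 1.0 - np.max(np.abs(Xa[:, None, :] - Xa[None, :, :]), axis=2)
    pos = np.zeros(n)
    for c in np.unique(y):
        member = (y == c).astype(float)                  # crisp decision-class membership
        # Lukasiewicz implicator I(a, b) = min(1, 1 - a + b); lower approx = inf over objects.
        lower = np.min(np.minimum(1.0, 1.0 - sim + member[None, :]), axis=1)
        pos = np.maximum(pos, lower)                     # fuzzy positive region
    return pos.sum() / n

print("gamma using attribute 0 only:", dependency(X, y, [0]))
print("gamma using both attributes :", dependency(X, y, [0, 1]))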
TITLE: Controller Architectures for Distributed Control Synthesis and Performance Optimization
ABSTRACT: Recently there has been some work on designing controllers with structure in a convex manner by using controller parameterization in terms of the Youla parameter. During the implementation of these distributed controllers, it becomes critical to consider the effect of sub-controller to sub-controller communication noise on stability and performance, and the bound on sub-controller to sub-controller communication signal power. When these two quantities are incorporated into the performance problem for obtaining an optimal distributed controller with the desired structure, the performance function in general becomes non-convex even in the parameterized form. Thus, it becomes important to find architectures which provide convex performance maps. In terms of these architectures, we are able to provide a class of distributed controllers in which we can search for an optimal controller in a convex manner. We have developed two such architectures for the distributed implementation of a controller without any structure.
TITLE: Introduction to RF front end
ABSTRACT: In today's extremely competitive silicon industry, the wireless market is still growing. Cellular phones with cameras, terrestrial TV and GPS have already arrived in the market. Adding all these features at the least extra cost and the least extra power/area poses a major design challenge for engineers. The Global Positioning System is a satellite-based navigation system consisting of 24 satellites which helps locate one's position down to sub-metre accuracy. Digital television, i.e. watching television on your cell-phone while on the move, has also become a reality. Such RF communication implementations can be divided into two broad categories: the RF front end, which receives/transmits the signal and mixes it with the carrier, and the digital baseband, the digital signal processor.
The RF front end of a receiver consists of an LNA (Low Noise Amplifier) followed by a mixer, filter and amplifier, and then an ADC; the converted digital signal is sent to the baseband processor. For a transmitter, the order is reversed, with the LNA replaced by a power amplifier.
Factors such as various RF communication standards at nearby carrier frequencies, low signal strengths and tight power budgets make the field of RF design quite challenging and interesting.
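One classic reason the LNA leads the receive chain is Friis' formula for the cascaded noise factor: stages after a high-gain, low-noise first stage contribute much less to the overall noise figure. The sketch below evaluates the formula for an assumed chain; the gain and noise figure numbers are illustrative, not taken from any particular design.

import math

# Friis' formula: F_total = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
# (gain_dB, noise_figure_dB) for LNA -> mixer -> filter -> amplifier (assumed values).
chain = [(15, 2.0), (-7, 9.0), (-3, 3.0), (20, 5.0)]

def db_to_lin(db):
    return 10 ** (db / 10)

f_total, g_cum = 0.0, 1.0
for i, (g_db, nf_db) in enumerate(chain):
    f = db_to_lin(nf_db)
    f_total += f if i == 0 else (f - 1) / g_cum   # later stages are divided by preceding gain
    g_cum *= db_to_lin(g_db)

print(f"Cascaded noise figure: {10 * math.log10(f_total):.2f} dB")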
TITLE: Voltage Collapse Contingency Screening
ABSTRACT: Since the August 2003 northeast blackout, the industry has witnessed multiple serious outages all over the world. These blackouts should be treated as an impetus to enhance the security level at which these systems are operated. Within a security analysis framework, it is important to investigate in advance the influence of potential faults on the power system, to determine which contingencies may lead to blackout or system instability through possible cascading outages. The ultimate objective of this process is to derive guidelines for defining the areas of secure operation. The objective of this talk is to present a novel and fast technique for branch and generator contingency ranking. Contingencies are ranked by estimating the post-contingency voltage collapse point (CP), given a power system operating point, a load demand forecast and a generation dispatch. The new algorithm presented in this talk provides a more accurate post-contingency "distance to collapse" estimate. Its distinguishing features are the ability to directly estimate the post-contingency CP, consideration of the breaking point, the capability to handle multi-terminal or multi-generator contingencies, the ability to deal with islanding contingencies, consideration of generator VAR limits, and quick, accurate ranking of contingencies for very large power systems. The proposed method has been tested for 1251 selected multi-terminal and 97 multi-generator contingencies of a 19140-bus system for two particular transfer cases. Results obtained from the new algorithm were compared with those obtained from full continuation power flow, showing that the proposed algorithm is very robust, efficient, accurate and fast. This talk will also present a new contingency ranking algorithm based on the estimated "distance to collapse", a contingency scope index, and a load loss index to handle local voltage collapse problems. The concepts of the contingency scope index (CSI) and the contingency scope spectrum are introduced; the CSI is a good indicator of how widely a contingency affects the system. The CSI is calculated through eigenvalue analysis, by comparing the right eigenvectors in the normal and contingency cases. The developed algorithms provide a true ranking of contingencies by placing contingencies with only local voltage problems lower in the list of critical contingencies. The proposed method has been tested for 6689 selected branch contingencies of a 3493-bus system for two particular transfer cases, with promising results.
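To give a feel for what "distance to collapse" ranking means, the deliberately tiny example below ranks line-outage contingencies in a single-load, parallel-line system where the collapse point has a closed form (for a unity-power-factor load fed through a pure reactance X from a source E, the maximum deliverable power is P_max = E^2/(2X)). This is not the estimation algorithm presented in the talk, and all system data are invented.

# Toy "distance to collapse" ranking (illustration only, NOT the talk's algorithm).
E = 1.0                                        # p.u. source voltage
lines = {"L1": 0.20, "L2": 0.25, "L3": 0.50}   # p.u. reactances of parallel lines (assumed)
P_load = 1.5                                   # p.u. forecast demand (assumed)

def p_max(reactances):
    x_eq = 1.0 / sum(1.0 / x for x in reactances)   # parallel combination
    return E ** 2 / (2.0 * x_eq)                    # collapse point for unity-pf load

base_margin = p_max(lines.values()) - P_load
ranking = []
for out in lines:
    remaining = [x for name, x in lines.items() if name != out]
    ranking.append((out, p_max(remaining) - P_load))  # post-contingency margin

ranking.sort(key=lambda t: t[1])                      # smallest margin = most critical
print(f"base-case margin: {base_margin:.2f} p.u.")
for name, margin in ranking:
    print(f"outage of {name}: post-contingency margin {margin:.2f} p.u.")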
TITLE: Wide Area Control for Large Power Systems
ABSTRACT: New technologies in synchronized telemetering, together with low-cost wide-area communication networks, have opened up new advanced applications in power system monitoring and control. Synchrophasor installations that allow fast assessment of the dynamic state of large interconnected power systems are emerging in power systems all over the world. The seminar will discuss recent work at Washington State University on advanced wide-area monitoring and control applications which are being implemented in prototype versions in the North American power grid. The tools are meant to detect system instability while it is still emerging. Whenever needed, the controller is designed to initiate suitable control actions to mitigate the instability before it can result in a cascading blackout of the large power grid.
TITLE: Enhancing Brain-Computer Interface Algorithms and Feedback: Implications for Use in Post-stroke Rehabilitation
ABSTRACT: An electroencephalograph (EEG) based brain-computer interface (BCI) involves feature extraction from electrophysiological brain signals recorded non-invasively from the scalp of the subject while she/he is performing predefined mental tasks (e.g. motor imagery). A classifier then assigns the relevant EEG features to the corresponding mental tasks, and the resultant classification is normally provided as visual feedback to the subject. It thus offers a non-muscular communication medium, critically needed by people with severe neuro-muscular disability, such as MND sufferers. It has been established that systematic mental practice (MP) of therapeutic exercises is nearly as effective as actually performing those exercises for post-stroke rehabilitation. However, it is difficult to motivate stroke survivors to undertake focused MP on a regular basis, particularly for arm rehabilitation. MP of therapeutic exercises can play the role of a mental task in a BCI, which may thus make BCI suitable for providing on-line neurofeedback to help patients undertake more effective MP for the purposes of post-stroke rehabilitation. However, despite the tremendous progress made in devising improved BCI technology over the last decade, it is still difficult to effectively account for non-stationary variability in stochastic EEG data due to varying brain dynamics and measurement noise. As a result, BCI systems lack sufficient robustness for constant practical use.
In order to examine the effects of non-stationarity in EEG, we have undertaken an extensive comparative evaluation of various spectral approaches to feature extraction to find a method that provides the best and most consistent feature separability. This resulted in the selection of the power spectral density (PSD) approach for feature extraction. Additionally, a novel type-2 fuzzy logic (T2FL) based classifier design approach has been investigated to account for non-stationary variation at the feature classification stage. The T2FL classifier provides significantly better classification accuracy compared to state-of-the-art classifiers such as support vector machines (SVMs) and linear discriminant analysis (LDA). This is because the T2FL can learn the range of variability in the feature distribution from the training session and effectively handle changes in this distribution observed in subsequent recording sessions within the so-called footprint of uncertainty. In order to assess the effect of feedback on the subject's performance, tests were conducted on six healthy subjects over nine sessions involving ball clenching by the left or right hand as mental tasks, using two types of feedback: a simple cue-based paradigm and a game-like basket paradigm. In the cue-based paradigm arrows were displayed to indicate left or right actions, and the basket paradigm involved playing a game of horizontally manoeuvring a ball to a basket placed randomly in the bottom left or bottom right side of the screen. The majority of subjects showed improvement in performance with the basket paradigm, which demonstrates that an appropriately designed feedback may provide enhanced motivation for undertaking motor imagery (or MP) exercises. This has obvious positive implications for using BCI to provide neurofeedback and thus facilitate more effective post-stroke arm rehabilitation through MP of therapeutic exercises.
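The T2FL classifier developed in this work is not available in standard libraries, but the baseline pipeline the abstract compares against (Welch PSD features classified with LDA) can be sketched briefly. The example below runs on simulated EEG-like epochs with an attenuated 10 Hz rhythm standing in for motor-imagery ERD; sampling rate, band limits and data are all assumptions for illustration.

import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Sketch of the baseline pipeline: Welch PSD band-power features + LDA classifier.
fs, n_epochs, n_samples = 128, 100, 256
rng = np.random.default_rng(0)

def make_epoch(mu_amp):
    t = np.arange(n_samples) / fs
    return mu_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, n_samples)  # 10 Hz rhythm

# Class 0: strong mu rhythm (rest-like); class 1: attenuated mu (imagery-like ERD).
epochs = np.array([make_epoch(2.0) for _ in range(n_epochs)] +
                  [make_epoch(0.5) for _ in range(n_epochs)])
labels = np.array([0] * n_epochs + [1] * n_epochs)

freqs, psd = welch(epochs, fs=fs, nperseg=128, axis=1)
band = (freqs >= 8) & (freqs <= 12)                      # mu-band power as the feature
features = np.log(psd[:, band].mean(axis=1, keepdims=True))

clf = LinearDiscriminantAnalysis().fit(features[::2], labels[::2])   # train on even epochs
print("held-out accuracy:", clf.score(features[1::2], labels[1::2]))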
WLAN Security - The Science & Engineering
Abstract:
This talk is in the nature of a tutorial on wireless LAN security vulnerabilities and the role of a wireless intrusion detection/prevention system. I will also talk a little about some of the challenges in converting the techniques into a robust, enterprise grade product.
Power Aware High Performance Microprocessor Design Challenges
Abstract:
With the increasingly large number of transistors on modern-day microprocessors, transistor real estate is becoming cheaper and feature sets are growing larger. The ability to increase the performance of these machines is becoming almost independent of the underlying instruction set architecture, and the focus is increasingly on the core and memory architecture and their power-aware implementations in deep sub-micron technologies. In this presentation, we will discuss instruction set consolidation and the challenges in implementing these complex designs in sub-65nm process technologies.
Title: Technology Progression and Convergence: Storage and Digital Media
Abstract:
As technology is evolving, the focus has shifted from storage on the local hard disk to storage on the network. There are several underlying technologies that go into making such storage possible. The industry has gone a step further, and based on networked storage, solutions for Information Life Cycle Management, Disaster Recovery and High Availability are evolving. Digital media solutions are associated with high storage requirements, and the two technology streams have evolved with each other. The talk will focus on these technologies and how related areas are fast converging. I will also talk about the work that TCS is doing in these and related areas.
TITLE: Sensor Networking and Data Management for Pervasive Healthcare
Abstract:
Wearable (in-vitro) as well as in-vivo wireless smart BioMEMS Sensors (Biosensors) are expected to revolutionize healthcare by enabling pervasive and real-time monitoring of physiological quantities (e.g. blood glucose level), automated drug delivery (e.g. of insulin) and novel prosthetics (e.g. artificial retina). Products such as SmartPill and Wireless EKG monitors are first steps in this direction. However, the use of networked biosensors for pervasive healthcare presents numerous challenges for networking and data management, since such techniques should be safe and dependable. To assist in the development and experimentation of networking and data management techniques that are suitable for healthcare, we are developing Ayushman, a testbed for a sensor network based health monitoring infrastructure that can be deployed in scenarios ranging from home-based care to disaster relief. Some of the essential services provided by Ayushman include: 1) Real-time and long-term gathering, aggregation and querying of diverse patient data such as EKG and blood pressure; 2) Localization of patients, doctors, and equipment; and 3) Context-based medical services and access control. This talk describes the requirements and challenges for sensor networking and data management for healthcare applications, the design and our current prototype of Ayushman, and some of the novel solutions we have developed towards meeting the goal of pervasive healthcare using biosensors.
Recognizing Unsegmented Text by Symbolic Indirect Correlation
Abstract:
I will describe a non-parametric approach to whole-word recognition that is being pursued in collaboration with Professors George Nagy (RPI) and Shashank Mehta (IITK). The approach builds two bipartite graphs that result from feature-level and lexical comparisons of the same word against a reference string which need not include the query word. The lexical graph preserves the relative order of edges in the feature graph corresponding to correctly recognized features. This observation leads to a subgraph-matching formulation of the recognition problem. Unlike parametric methods, such as hidden-Markov models (HMMs), the proposed approach does not require extensive training to estimate the model parameters.
The Art of Wireless Security
Abstract:
Aside from covering several technical problems in the area of wireless security, I will show samples of artwork created by AirTight engineering team using SpectraGuard RF propagation modeling tool.
Online Association Policies in IEEE 802.11 WLANs
Abstract:
In many IEEE 802.11 WLAN deployments, wireless clients have a choice of access points to connect to. In current systems, clients associate with the access point with the strongest signal to noise ratio. However, such an association mechanism can lead to unequal load sharing resulting in diminished system performance. In this talk, we describe an analytical approach based on stochastic dynamic programming to find the optimal client-AP association algorithm. For a simple topology consisting of two access points, we numerically determine the optimal association rule. By studying the nature of the optimal rule, we propose a near-optimal heuristic and study its efficacy for more complicated arrival patterns and larger topologies. We then study the stability of different association policies as a function of the spatial distribution of arriving clients. We find for each policy the range of client arrival rates for which the system is stable. For small networks, we use Lyapunov function methods to formally establish the stability or instability of certain policies in specific scenarios. Our heuristic policy is shown to have very good stability properties when compared to several other natural policies. We also validate our analytical results by detailed simulation employing the IEEE 802.11 MAC and provide pointers on implementation.
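The exact near-optimal policy derived in the talk is not reproduced here, but the sketch below contrasts the default strongest-signal rule with a simple load-aware heuristic to show the kind of association decision being compared. AP names, SNR values and the SNR floor are hypothetical.

# Illustrative only: strongest-signal vs. a simple load-aware association rule.
def strongest_signal(aps, client_snr):
    return max(aps, key=lambda ap: client_snr[ap])

def load_aware(aps, client_snr, load, snr_floor=15.0):
    # Consider only APs with acceptable SNR, then pick the least-loaded one.
    usable = [ap for ap in aps if client_snr[ap] >= snr_floor] or list(aps)
    return min(usable, key=lambda ap: load[ap])

aps = ["AP1", "AP2"]
client_snr = {"AP1": 30.0, "AP2": 22.0}      # dB, hypothetical measurements
load = {"AP1": 18, "AP2": 3}                 # currently associated clients

print("strongest-signal choice:", strongest_signal(aps, client_snr))
print("load-aware choice      :", load_aware(aps, client_snr, load))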
An overview of BlueGene/L Supercomputer
Abstract:
Blue Gene is an IBM Research project dedicated to exploring the frontiers in supercomputing: in computer architecture, in the software required to program and control massively parallel systems, and in the use of computation to advance our understanding of important biological processes such as protein folding.
The full Blue Gene/L machine is being built with the Department of Energy's NNSA/Lawrence Livermore National Laboratory in California; it will have a peak speed of 360 Teraflops and is expected to be delivered by the end of 2005. In November 2004, Blue Gene/L took the #1 position in the TOP500 supercomputer list with a performance of 70.72 Teraflops on the Linpack benchmark. A growing list of applications including hydrodynamics, quantum chemistry, molecular dynamics, climate modeling and financial modeling are being run on BlueGene/L now.
In this talk I will describe the architecture of the BlueGene/L machine and the different decisions made in its design that enabled it to achieve the top spot in the supercomputing list. I will also describe some of the research being carried at IBM India Research Lab in this area.
BlueGene/L has been a six-year joint research project between IBM and Lawrence Livermore National Laboratory.
Advances in VLSI Design for Testability
Abstract
The main purpose of the test process, as it is applied to the manufacturing of semiconductor devices, is to provide a measure of the quality and/or reliability of a finished semiconductor product. The purpose of Design-for-Test (DFT) is to place "hardware hooks" on the die to enable conducting the quality/reliability measurement. If done correctly, DFT will:
Enable the quality goals to be met with a high degree of confidence (fault coverage) during testing.
Allow the coverage measurement to be done efficiently and economically to meet the cost-of-test goals.
Enable some form of test vector automation, such as ATPG.
The talk aims at covering the fundamental philosophy of test, the different types of fault models used by the software tools, and the Automatic Test Pattern Generation (ATPG) process. It will also touch upon ATPG algorithms and fault simulation. Popular structured test methodologies such as full scan, Built-In Self-Test (BIST) for memory and logic, boundary scan, etc., will be discussed. Finally, some commercially available test tools, and a few recent announcements from some of the DFT tool vendors, will be mentioned.
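To make the notions of fault detection and fault coverage concrete, the sketch below exhaustively fault-simulates single stuck-at faults on a tiny invented two-input circuit. Real fault simulators and ATPG tools of course operate on full gate-level netlists; this is only a teaching-scale illustration.

from itertools import product

# Toy single stuck-at fault simulator for the circuit y = (a AND b) OR (NOT a).
NETS = ["a", "b", "n1", "n2", "y"]

def simulate(a, b, fault=None):
    def val(net, v):
        # Override a net's value if it is the faulty one (stuck-at-0 or stuck-at-1).
        return fault[1] if fault and fault[0] == net else v
    a, b = val("a", a), val("b", b)
    n1 = val("n1", a & b)          # AND gate
    n2 = val("n2", 1 - a)          # inverter
    return val("y", n1 | n2)       # OR gate

faults = [(net, sv) for net in NETS for sv in (0, 1)]
detected = set()
for a, b in product((0, 1), repeat=2):         # exhaustive 2-input test set
    good = simulate(a, b)
    for f in faults:
        if simulate(a, b, f) != good:          # faulty response differs -> fault detected
            detected.add(f)

print(f"fault coverage: {len(detected)}/{len(faults)} "
      f"= {100 * len(detected) / len(faults):.0f}%")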
Emerging Threats in WLANs and Security Systems to Tackle Them
Abstract
While WLANs are making it convenient for end users to be net-connected, they are also making it easier for intruders to connect to your network. This talk is about the emerging WLAN intrusion threats not solved by new security standards such as 802.11i. New intrusion detection and prevention systems are coming up to address these threats. We will discuss key desirable elements of such systems and technical details of some of them. The talk will be followed by a proof-of-concept demonstration of one of AirTight Networks' intrusion detection and prevention products.
ROUTER DESIGN AND METRO ETHERNET
Abstract
Router design has gone through a kind of evolution cycle since the mid-eighties. We will talk in detail about this paradigm change, besides covering the basic architecture of today's router. In the second half of the talk, we will cover Metro Ethernet, an upcoming choice for the customer UNI.
Voice Activated Multimedia Movies over the Internet
Abstract
In recent years, the Internet has become a resource for active communication and an active repository of various types of knowledge bases. As the Internet gets more popular, the demand for cross-cultural exchanges, archived multimedia movies, archived news clips, children's stories and educational stories has increased tremendously. People are retrieving the archived information, modifying it, and archiving or transmitting newly created information for future use. PDAs are becoming new portals for remotely retrieving this continuously modified knowledge base. Based upon this paradigm, much research on Internet-based languages and international standards for multimedia transmission is fast emerging. Many new multimedia formats are being developed for multimedia communication using XML as the intermediate language. The recent MPEG-7 standard is based upon XML. However, the growth of demand is much faster than the growth of bandwidth. XML has become an international standard for low-level structured data transmission.
In this research, I describe our effort to transmit multimedia movies over the Internet with reduced bandwidth requirements while retaining the needed QoS (Quality of Service). We have developed an STMD (Single Transmission Multiple Display) paradigm that exploits the representation of object components as hierarchical graphs, transmits the components and the hierarchical graph over the Internet, and reconstructs the image at the client end. The model uses STMD and static analysis of archived movies to reduce the data transmission requirement. The lack of memory in PDAs has been addressed using server-directed buffer management. The model has been extended to transmit 3D object-based movies, and to interactively change them using a new concept of 'voice-activated dynamic XML', where the multimedia representation in XML is modified using voice-activated commands to create new movies interactively.
A Novel Approach to Facilitate Bio-Informatics Information Integration
Abstract
The essential heterogeneity problem, also known as the semantic heterogeneity problem, becomes increasingly prominent as information sources expand rapidly in all kinds of subject domains, especially in the bio-informatics area which often deals with multi-terabyte datasets from various distributed sources. Traditionally, researchers use standards- or mediation-based methods to integrate heterogeneous information. This talk presents a novel approach to mitigate the essential heterogeneity for bio-informatics data sources. The approach is based on the proposition that, by monitoring, extracting, clustering, and visualizing bio-informatics metadata across disparately created data sources, patterns of practice can be identified and the definition of standards requirements can be facilitated, thereby promoting homogeneity of data sources. To instantiate the approach, a research architecture, microSEEDS, and its implementation and envisioned uses are discussed.
On Recognizing Groups of Human Beings in Images
Abstract
One of the challenging problems in vision is the recognition of human beings, as it has a lot of applications from surveillance to industry and defense. In the talk, the problem of recognizing human beings will be dealt with using distance transformations, mostly Hausdorff and chamfer matching methods. Comments on an implementation on a Cray T3E supercomputer will also be made. At the end, some navigation strategies will also be discussed.
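The core of chamfer matching can be shown in a few lines: the distance transform of the scene's edge map gives, at every pixel, the distance to the nearest edge, and the chamfer score of a template at some offset is the mean of those distances over the template's edge pixels (lower is better). The sketch below is a generic illustration on a synthetic edge map, not the talk's system; the edge strokes and template window are invented.

import numpy as np
from scipy.ndimage import distance_transform_edt

# Synthetic scene edge map: a vertical and a horizontal edge stroke.
scene_edges = np.zeros((40, 40), dtype=bool)
scene_edges[10:30, 15] = True
scene_edges[10, 15:25] = True

dist_to_edge = distance_transform_edt(~scene_edges)   # distance to nearest edge pixel

# Template edge coordinates, cut from a window of the scene for this toy example.
template = np.argwhere(scene_edges[8:32, 13:27])

def chamfer_score(offset_r, offset_c):
    coords = template + np.array([offset_r, offset_c])
    return dist_to_edge[coords[:, 0], coords[:, 1]].mean()

print("score at the true location:", chamfer_score(8, 13))    # perfect overlap -> 0
print("score a few pixels away   :", chamfer_score(12, 18))   # larger score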
Biometric Authentication: How Do I Know Who You Are
Abstract
A wide variety of systems require reliable personal recognition schemes to either confirm or determine the identity of an individual requesting their services. The purpose of such schemes is to ensure that only a legitimate user, and not anyone else, accesses the rendered services. Examples of such applications include secure access to buildings, computer systems, laptops, cellular phones and ATMs. Biometric recognition, or simply biometrics, refers to the automatic recognition of individuals based on their physiological and/or behavioral characteristics. By using biometrics it is possible to confirm or establish an individual's identity based on 'who she is', rather than by 'what she possesses' (e.g., an ID card) or 'what she remembers' (e.g., a password). Current biometric systems make use of fingerprints, hand geometry, iris, face, voice, etc., to establish a person's identity. Biometric systems also introduce an aspect of user convenience. For example, they alleviate the need for a user to 'remember' multiple passwords associated with different applications. A biometric system that uses a single biometric trait for recognition has to contend with problems related to non-universality of the trait, spoof attacks, limited degrees of freedom, large intra-class variability, and noisy data. Some of these problems can be addressed by integrating the evidence presented by multiple biometric traits of a user (e.g., face and iris). Such systems, known as multimodal biometric systems, demonstrate substantial improvement in recognition performance. In this talk, we will present various applications of biometrics, challenges associated in designing biometric systems, various fusion strategies available to implement a multimodal biometric system and issues related to securing the template and data encryption using biometric information.
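The talk surveys several fusion strategies; as a hedged illustration of just one of the simplest and most common, the sketch below min-max normalises the scores from two matchers and combines them with a weighted sum. The scores, weights and the face/iris pairing are hypothetical.

import numpy as np

# Score-level fusion: min-max normalisation followed by a weighted sum.
def min_max(scores):
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min())

# Hypothetical match scores of four candidate identities from two matchers.
face_scores = [0.62, 0.55, 0.91, 0.40]     # higher = better match
iris_scores = [120.0, 340.0, 310.0, 95.0]  # different native scale

fused = 0.4 * min_max(face_scores) + 0.6 * min_max(iris_scores)
best = int(np.argmax(fused))
print("fused scores:", np.round(fused, 3), "-> claimed identity index:", best)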
A Fast Heuristic for FPGA Placement
Abstract
Field-Programmable Gate Arrays (FPGAs) are semiconductor chips that implement digital circuits by configuring programmable logic and their interconnections. The use of FPGAs has grown almost exponentially because they dramatically reduce design turn-around time and start-up costs for electronic products, compared with Application-Specific Integrated Circuits (ASICs). A set of CAD tools is required to compile a hardware description into bitstream files that are used to configure a target FPGA to implement the desired circuit. Currently, the compile time, which is dominated by placement and routing times, can easily take hours or days to complete for large (8 million gate) FPGAs. With 40 million gate FPGAs on the horizon, these prohibitively long compile times may nullify the time-to-market advantage of FPGAs. This talk presents two novel placement heuristics that significantly reduce the amount of time required to achieve high quality placements, compared with a state-of-the-art tool, VPR. The first algorithm is an enhancement of simulated annealing that converges very fast. The second algorithm is based on clustering to reduce the complexity of the solution space, followed by de-clustering during which the placement is fine-tuned to produce high quality solutions.
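For context, the baseline that both heuristics improve upon is a simulated-annealing placement loop in the spirit of VPR, minimising half-perimeter wirelength (HPWL). The sketch below shows that baseline loop on a toy grid with an invented netlist; block overlap is ignored and the talk's fast-converging and clustering variants are not reproduced.

import math
import random

# Baseline simulated-annealing placement minimising HPWL on a toy grid.
random.seed(0)
GRID, N_BLOCKS = 8, 12
nets = [(0, 1, 2), (2, 3), (4, 5, 6, 7), (1, 8), (9, 10, 11), (3, 9)]  # blocks per net

cells = random.sample(range(GRID * GRID), N_BLOCKS)      # random initial placement
pos = {b: divmod(cells[b], GRID) for b in range(N_BLOCKS)}

def hpwl():
    total = 0
    for net in nets:
        xs = [pos[b][0] for b in net]
        ys = [pos[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

cost, temp = hpwl(), 10.0
while temp > 0.01:
    for _ in range(200):                       # moves per temperature step
        b = random.randrange(N_BLOCKS)
        old = pos[b]
        pos[b] = (random.randrange(GRID), random.randrange(GRID))  # overlap ignored (toy)
        new_cost = hpwl()
        if new_cost > cost and random.random() >= math.exp((cost - new_cost) / temp):
            pos[b] = old                       # reject the uphill move
        else:
            cost = new_cost                    # accept downhill (or lucky uphill) move
    temp *= 0.9                                # geometric cooling schedule

print("final half-perimeter wirelength:", cost)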
Design Practices for High-end Embedded Systems
Abstract
With the increase in available space in VLSI circuits, a lot more hardware can be packed into a single chip. For modern embedded systems, low power and high speed are the two major performance goals. Current system design places high demands on the system design methodology as well as on tools.
In this talk we explore the embedded system design flow, focusing mainly on system-on-chip design techniques. We then describe the challenges and techniques for tool developers as well as users.
Minimum Dynamic Power CMOS Circuits
Abstract
The dynamic power consumption of a CMOS circuit is at least an order of magnitude greater than the other components such as the short-circuit and steady-state power. We prove that the minimum dynamic power design, which is completely free from glitches, is obtained when the differential path delay at every gate is less than the inertial delay of the gate. When the overall input to output delay is also constrained, we obtain the minimum power design by a linear program (LP) consisting of an inequality set whose size is linear in the circuit size. The LP determines the delays for all gates. For physical implementation, we express the delays of CMOS gates as linear functions of the length/width ratios of transistors. Routing delays are entered in the LP as constants and their values are determined through iterations of the LP and physical layout. The optimized design of the benchmark c7552 consumes 38% average power as compared to the original unoptimized circuit.
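To illustrate the flavour of the constraints involved, the sketch below sets up a toy linear program for a two-gate reconvergent circuit: gate delays must satisfy the glitch-free condition (differential path delay at each gate no greater than its inertial delay) and an overall input-to-output delay bound. The circuit, the minimum-delay bounds and the placeholder objective (minimise total gate delay) are assumptions for illustration; the actual LP in this work also ties the solved delays to transistor length/width ratios and hence to power, which is omitted here.

import numpy as np
from scipy.optimize import linprog

# Toy circuit: inputs a, b arrive at t = 0; G1 = NAND(a, b); G2 = NAND(G1, b).
# Variables x = [d1, d2, T2, t2]: gate delays and the latest/earliest arrival
# times at G2's inputs. Glitch-free condition at G2: T2 - t2 <= d2.
D_MAX = 10.0                       # overall input-to-output delay bound (assumed)
c = [1, 1, 0, 0]                   # placeholder objective: minimise d1 + d2
A_ub = [[ 1, 0, -1,  0],           # T2 >= d1   (arrival from G1)
        [ 0, 0, -1,  0],           # T2 >= 0    (arrival from input b)
        [-1, 0,  0,  1],           # t2 <= d1
        [ 0, 0,  0,  1],           # t2 <= 0
        [ 0,-1,  1, -1],           # glitch-free: T2 - t2 <= d2
        [ 0, 1,  1,  0]]           # I/O delay: T2 + d2 <= D_MAX
b_ub = [0, 0, 0, 0, 0, D_MAX]
bounds = [(1, None), (1, None), (0, None), (None, None)]   # unit minimum gate delay

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
d1, d2, T2, t2 = res.x
print(f"d1 = {d1:.2f}, d2 = {d2:.2f}, arrival window at G2 = [{t2:.2f}, {T2:.2f}]")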
More topics will be given in due course of time. Wish you all the best!
Shajeemohan B.S.
Assistant Professor
Department of Electronics and Communication Engineering,
Government College of Engineering, Kannur, Kerala.
670652
Email: shajeemohan@yahoo.com
shajeemohan@rediffmail.com
Phone: +91-0497-2780226