Efficient storage mechanisms for building better supercapacitors
Host: Materials Department & Materials Research Laboratory
The Institute for Energy Efficiency (IEE) Apprentice Researchers Program is hosted at the University of California, Santa Barbara, by the Center for Science and Engineering Partnerships (CSEP) at the California Nanosystems Institute (CNSI).
Research in electrochemical energy storage is converging on systems that combine battery-level energy density with capacitor-level cycling stability and power density. One approach is to use redox-active electrolytes, which add faradaic charge storage to increase the energy density of supercapacitors. Aqueous redox-active electrolytes are simple to prepare and to scale up, and they can be synergistically optimized to fully exploit the dynamic charge/discharge and storage properties of activated-carbon-based electrode systems.
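The energy gain from faradaic charge storage can be illustrated with the standard capacitor energy relation E = ½CV². A minimal sketch, with assumed illustrative capacitance values (not figures from the talk):

```python
# Illustrative comparison: energy stored in a plain electric
# double-layer capacitor (EDLC) vs. one with extra faradaic
# capacitance from a redox-active electrolyte.
# Capacitance values below are assumptions for illustration only.

def capacitor_energy_wh_per_kg(capacitance_f_per_g, voltage_v):
    """E = 1/2 * C * V^2, converted from J/g to Wh/kg."""
    joules_per_gram = 0.5 * capacitance_f_per_g * voltage_v ** 2
    return joules_per_gram * 1000 / 3600  # J/g -> Wh/kg

edlc = capacitor_energy_wh_per_kg(100, 1.0)   # assumed: activated carbon alone
redox = capacitor_energy_wh_per_kg(300, 1.0)  # assumed: with faradaic contribution
print(f"EDLC: {edlc:.1f} Wh/kg, redox-enhanced: {redox:.1f} Wh/kg")
```

Because energy scales linearly with capacitance at fixed voltage, tripling the effective capacitance triples the stored energy, which is the motivation for adding faradaic charge storage.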
The UC Santa Barbara campus operates as a city, including the infrastructure, utility distribution networks and building systems required to provide a world-class teaching and research setting. A number of demand-side management initiatives being undertaken on campus ensure that energy demand and utility expenditures are continually reduced as UCSB climbs the national and international rankings.
A wide variety of electronic and optoelectronic devices such as transistors, LEDs, lasers, and solar cells use epitaxially grown thin films of semiconductor alloys. This imposes a constraint of lattice-constant matching between substrate and film if crystal defects such as dislocations are to be avoided, and it has traditionally restricted epitaxy to only a handful of alloy compositions. In this talk, I will discuss how one can access new alloy compositions for a range of optoelectronic applications while keeping dislocation densities low.
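The lattice-matching constraint can be quantified with Vegard's law (linear interpolation of alloy lattice constants) and the misfit strain between film and substrate. A minimal sketch; the In(x)Ga(1-x)As-on-GaAs example is an illustrative assumption, with standard literature lattice constants:

```python
# Lattice misfit of an epitaxial alloy film on a substrate.
# Lattice constants in angstroms (standard literature values);
# the specific alloy/substrate pair is an illustrative assumption.

def vegard(a_A, a_B, x):
    """Vegard's law: lattice constant of alloy A(1-x)B(x)."""
    return (1 - x) * a_A + x * a_B

def misfit(a_film, a_substrate):
    """Misfit strain f = (a_sub - a_film) / a_film."""
    return (a_substrate - a_film) / a_film

a_GaAs, a_InAs = 5.653, 6.058
a_film = vegard(a_GaAs, a_InAs, 0.2)  # In0.2Ga0.8As
f = misfit(a_film, a_GaAs)
print(f"a_film = {a_film:.3f} A, misfit vs. GaAs = {f:.2%}")
```

Even a misfit of about one percent is enough to nucleate misfit dislocations beyond a critical film thickness, which is why composition space is so tightly constrained in conventional epitaxy.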
Perhaps it was the image of the dead seabird that had unwittingly ingested shards of plastic. Or the footage of the turtle precariously tangled in plastic netting. Whatever the catalyst, plastic pollution has rapidly evolved from a largely abstract problem to a clearly relatable horror. And with that, public opinion has shifted from apathetic to appalled, inspiring numerous calls to action.
Modern computing systems struggle to perform learning tasks efficiently. In this talk, I will present a new brain-inspired computing architecture that supports a wide range of learning tasks while offering higher system efficiency than existing platforms. I will first focus on HyperDimensional (HD) computing, an alternative method of computation that exploits key principles of brain functionality: (i) robustness to noise and error, and (ii) intertwined memory and logic. Building on these principles, we design a new learning algorithm resilient to hardware failure.
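The core HD computing primitives can be sketched with binary hypervectors: binding (elementwise XOR) associates vectors, bundling (elementwise majority) superposes them into a memory, and a Hamming-based similarity retrieves stored items. A minimal sketch with assumed dimensionality and encoding, not the specific architecture from the talk:

```python
# Minimal HyperDimensional (HD) computing sketch: random binary
# hypervectors, binding, bundling, and similarity. Dimensionality
# and encoding choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
D = 10000  # high dimensionality is what gives HD its noise robustness

def random_hv():
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Associate two hypervectors (elementwise XOR)."""
    return a ^ b

def bundle(*hvs):
    """Superpose hypervectors by elementwise majority vote."""
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

def similarity(a, b):
    """Normalized similarity: 1.0 = identical, ~0.5 = unrelated."""
    return 1 - np.mean(a ^ b)

x, y, z = random_hv(), random_hv(), random_hv()
memory = bundle(x, y, z)
print(similarity(memory, x), similarity(memory, random_hv()))
```

A bundled memory stays measurably closer to each stored vector (similarity well above 0.5) than to an unrelated one (similarity near 0.5), and flipping a modest fraction of bits barely changes these distances, which is the hardware-failure resilience the abstract refers to.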
Photonics can reduce energy consumption in information processing and communications while simultaneously increasing the interconnect bandwidth density. The energy consumption in data centers is shifting from logic operations to interconnect energies. Without the prospect of substantial reduction in energy per bit communicated, the exponential growth of our use of information is limited. The use of optical interconnects fundamentally addresses both interconnect energy and bandwidth density, and is the only scalable solution to this problem.
Light-matter interaction is one of the fundamental phenomena of the universe and has greatly shaped the development of human society, including the evolution of our visual systems and visually guided behavior. In this talk, we present research on light-matter interactions at the nanoscale, also known as “nanophotonics”, to help brighten the future of energy sustainability. The applications include dispatchable solar electricity, ultralow-power photonic data links, and color-contrast manipulation of single atomic/molecular layers toward energy-efficient displays of the future.
This presentation focuses on two recent contributions on model compression and acceleration of deep neural networks (DNNs). The first is a systematic, unified DNN model compression framework based on the powerful optimization tool ADMM (Alternating Direction Method of Multipliers), which applies to non-structured pruning, various types of structured weight pruning, and weight quantization of DNNs. It achieves unprecedented model compression rates on representative DNNs, consistently outperforming competing methods.
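In ADMM-based pruning, the sparsity constraint is enforced by a Euclidean projection step: projecting a weight tensor onto the set {W : ‖W‖₀ ≤ k} simply keeps the k largest-magnitude entries and zeroes the rest. A minimal sketch of that projection alone, not the framework's full ADMM loop (which alternates this projection with gradient updates on the unconstrained weights):

```python
# Projection step used in ADMM-based weight pruning: the Euclidean
# projection onto an L0 sparsity constraint keeps the k entries of
# largest magnitude and zeroes the rest. Illustrative sketch only.
import numpy as np

def project_sparse(w, k):
    """Project w onto {W : number of nonzeros <= k}."""
    flat = w.ravel().copy()
    n_zero = flat.size - k
    if n_zero > 0:
        # indices of the n_zero smallest-magnitude entries
        idx = np.argpartition(np.abs(flat), n_zero)[:n_zero]
        flat[idx] = 0.0
    return flat.reshape(w.shape)

w = np.array([[0.1, -2.0, 0.3],
              [1.5, -0.05, 0.7]])
print(project_sparse(w, 3))
```

This closed-form projection is what makes ADMM attractive here: the hard combinatorial sparsity constraint is handled exactly in one subproblem, while the other subproblem remains ordinary differentiable training.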