Friday, April 3, 2009

Green Technology



Green technology (abbreviated as greentech) or clean technology (abbreviated as cleantech) is the application of environmental science to conserve the natural environment and resources and to curb the negative impacts of human activity. Sustainable development is at the core of environmental technologies: when it is applied as a solution to environmental issues, the solutions need to be socially equitable, economically viable, and environmentally sound.


Related technologies
Some environmental technologies that support sustainable development include recycling, water purification, sewage treatment, environmental remediation, flue-gas treatment, solid waste management, and renewable energy. Some technologies assist directly with energy conservation, while others are emerging that help the environment by reducing the amount of waste produced by human activities. Energy sources such as solar power create fewer problems for the environment than traditional sources of energy like coal and petroleum.
Scientists continue to search for clean energy alternatives to our current power production methods. Some technologies, such as anaerobic digestion, produce renewable energy from waste materials. The global reduction of greenhouse gases depends on the adoption of energy conservation technologies at the industrial level as well as on this clean energy generation. That includes using unleaded gasoline, solar energy and alternative fuel vehicles, including plug-in hybrid and hybrid electric vehicles.
Since electric motors consume 60% of all electricity generated[citation needed], advanced energy-efficient electric motor (and electric generator) technologies that are cost-effective enough to encourage wide adoption, such as the brushless wound-rotor doubly-fed electric machine and the energy saving module, can reduce the amount of carbon dioxide (CO2) and sulfur dioxide (SO2) that would otherwise be introduced to the atmosphere when electricity is generated from fossil fuels. Greasestock, an event held yearly in Yorktown Heights, New York, is one of the largest showcases of environmental technology in the United States.
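
To give a feel for the scale involved, the back-of-the-envelope sketch below estimates the CO2 avoided by raising average motor efficiency. Apart from the 60% share quoted above, every figure (total generation, efficiencies, grid emission factor) is an illustrative assumption:

    # Rough estimate of CO2 avoided by more efficient electric motors.
    # All inputs except the 60% motor share are illustrative assumptions.

    TOTAL_ELECTRICITY_TWH = 20000        # assumed world generation, TWh/year
    MOTOR_SHARE = 0.60                   # motors' share of consumption (from the text)
    OLD_EFFICIENCY = 0.88                # assumed average motor efficiency today
    NEW_EFFICIENCY = 0.93                # assumed efficiency of advanced motors
    GRID_CO2_T_PER_MWH = 0.5             # assumed grid emission factor, tonnes CO2/MWh

    motor_twh = TOTAL_ELECTRICITY_TWH * MOTOR_SHARE
    # Electricity saved by delivering the same shaft output at higher efficiency.
    saved_twh = motor_twh * (1 - OLD_EFFICIENCY / NEW_EFFICIENCY)
    saved_mwh = saved_twh * 1_000_000    # 1 TWh = 1,000,000 MWh
    avoided_co2_t = saved_mwh * GRID_CO2_T_PER_MWH

    print(f"Electricity saved: {saved_twh:.0f} TWh/year")
    print(f"CO2 avoided:       {avoided_co2_t / 1e9:.2f} billion tonnes/year")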

Criticism
Some groups, including green anarchists, have criticised the concept of environmental technology. From their viewpoint, technology is seen as a system rather than a specific physical tool. Technology, it is argued, requires the exploitation of the environment through the creation and extraction of resources, and the exploitation of people through labour, specialisation and the division of labour. There is no “neutral” form of technology, as things are always created in a certain context with certain aims and functions. Thus, green technology is rejected as an attempt to reform this exploitative system, merely changing it on the surface to make it seem environmentally friendly, despite continued unsustainable levels of human and natural exploitation.

Examples

Renewable resources (non-fossil-fuel energy sources)
  • Hydropower
  • Geothermal power
  • Solar power
    • Solar cell
    • Solar heating
  • Wind power
Green technology for wireless communications
  • Energy-efficient radio access networks (NEC)
  • Energy efficiency enhancement in radio access networks (Ericsson)
  • Cliffside and Atom projects (Intel)
  • Wi-Fi with Bluetooth function (Ozmo)

Cloud Computing



Cloud computing is a style of computing in which dynamically scalable and often virtualised resources are provided as a service over the Internet.[1][2][3][4] Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them.

The concept incorporates infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS), as well as Web 2.0 and other recent (ca. 2007–2009) technology trends that share the common theme of relying on the Internet to satisfy users' computing needs. Examples of SaaS vendors include Salesforce.com and Google Apps, which provide common business applications online that are accessed from a web browser, while the software and data are stored on the provider's servers.
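
As a minimal illustration of the SaaS pattern just described, the sketch below pulls records from a hosted service over plain HTTP: the application logic and the data stay on the provider's servers, and the client needs nothing beyond an Internet connection. The endpoint URL and the response shape are hypothetical placeholders, not any real vendor's API:

    # Minimal SaaS-style client: the software and data live on the
    # provider's servers; the client only speaks HTTP.
    import json
    import urllib.request

    # Hypothetical endpoint, for illustration only.
    API_URL = "https://api.example-saas.invalid/v1/contacts?limit=10"

    def fetch_contacts(url: str) -> list:
        """Request JSON records from the hosted service over the Internet."""
        with urllib.request.urlopen(url) as response:
            return json.loads(response.read().decode("utf-8"))

    if __name__ == "__main__":
        for contact in fetch_contacts(API_URL):
            print(contact)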

The term cloud is used as a metaphor for the Internet, based on how the Internet is depicted in computer network diagrams, and is an abstraction for the complex infrastructure it conceals.


Comparisons

Cloud computing is often confused with grid computing ("a form of distributed computing whereby a 'super and virtual computer' is composed of a cluster of networked, loosely-coupled computers, acting in concert to perform very large tasks"), utility computing (the "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility such as electricity") and autonomic computing ("computer systems capable of self-management").

Indeed, many cloud computing deployments as of 2009 depend on grids, have autonomic characteristics and bill like utilities, but cloud computing can be seen as a natural next step from the grid-utility model. Some successful cloud architectures have little or no centralized infrastructure or billing systems whatsoever, including peer-to-peer networks like BitTorrent and Skype and volunteer computing like SETI@home.[citation needed]

Architecture

The majority of cloud computing infrastructure as of 2009[update] consists of reliable services delivered through data centers and built on servers with different levels of virtualization technologies. The services are accessible from anywhere that has access to networking infrastructure. The Cloud appears as a single point of access for all the computing needs of consumers. Commercial offerings need to meet the quality-of-service requirements of customers and typically offer service level agreements.[12] Open standards are critical to the growth of cloud computing, and open source software has provided the foundation for many cloud computing implementations.[13]

History

The Cloud is a term that borrows from telephony. Up until the 1990s, data circuits (including those that carried Internet traffic) were hard-wired between destinations. In the 1990s, long-haul telephone companies began offering Virtual Private Network (VPN) service for data communications. The telephone companies were able to offer these VPN-based services with the same guaranteed bandwidth as fixed circuits at a lower cost because they retained the ability to switch traffic to balance utilization as they saw fit, thus using their overall network bandwidth more effectively. As a result of this arrangement, it was impossible to determine in advance precisely what path traffic would take. The term "telecom cloud" was used to describe this type of networking.

Cloud computing is very similar. It relies heavily on virtual machines (VMs) that are spawned on demand to meet the user's needs. Because these virtual instances are spawned on demand, it is impossible to determine in advance how many VMs will be running at any given time. And because VMs can be spawned on any available computer as conditions demand, they are not tied to a specific location either, much like traffic in a cloud network. A common depiction in network diagrams is a cloud outline.
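
The on-demand behaviour described above can be sketched in a few lines of code. This is a toy model rather than any real hypervisor's API: a pool that creates VM records only when a request arrives and releases them afterwards, so the number of running VMs at any moment is purely a function of load:

    # Toy model of on-demand VM provisioning (not a real hypervisor API).
    import itertools

    class VmPool:
        """Spawns virtual machine records on demand and tracks what is running."""

        def __init__(self):
            self._ids = itertools.count(1)
            self.running = {}

        def spawn(self, workload: str) -> int:
            vm_id = next(self._ids)
            self.running[vm_id] = workload   # in reality: boot an image somewhere
            return vm_id

        def release(self, vm_id: int) -> None:
            self.running.pop(vm_id, None)    # in reality: shut the instance down

    pool = VmPool()
    a = pool.spawn("web frontend")
    b = pool.spawn("batch job")
    pool.release(b)
    # How many VMs exist right now is purely a function of current demand.
    print(f"{len(pool.running)} VM(s) running: {pool.running}")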



Political issues



The Cloud spans many borders and "may be the ultimate form of globalization." As such it becomes subject to complex geopolitical issues: providers must satisfy myriad regulatory environments in order to deliver service to a global market. This dates back to the early days of the Internet, where libertarian thinkers felt that "cyberspace was a distinct place calling for laws and legal institutions of its own"; author Neal Stephenson envisaged this as a tiny island data haven called Kinakuta in his classic science-fiction novel Cryptonomicon.



Key characteristics

  • Cost is greatly reduced and capital expenditure is converted to operational expenditure.[33] This lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based options, and minimal or no IT skills are required for implementation (see the sketch after this list).[34]

  • Device and location independence[35] enable users to access systems using a web browser regardless of their location or what device they are using, e.g., PC or mobile. As the infrastructure is off-site (typically provided by a third party) and accessed via the Internet, users can connect from anywhere.[34]

  • Multi-tenancy enables sharing of resources and costs among a large pool of users, allowing for:

    • Centralization of infrastructure in areas with lower costs (such as real estate, electricity, etc.)

    • Peak-load capacity increases (users need not engineer for highest possible load-levels)

    • Utilisation and efficiency improvements for systems that are often only 10-20% utilised.[23]

  • Reliability improves through the use of multiple redundant sites, which makes it suitable for business continuity and disaster recovery.[36] Nonetheless, most major cloud computing services have suffered outages and IT and business managers are able to do little when they are affected.[37][38]

  • Scalability via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time, without users having to engineer for peak loads. Performance is monitored, and consistent, loosely-coupled architectures are constructed using web services as the system interface.[34]

  • Security typically improves due to centralization of data, increased security-focused resources, etc., but raises concerns about loss of control over certain sensitive data. Security is often as good as or better than traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. Providers typically log accesses, but accessing the audit logs themselves can be difficult or impossible.

  • Sustainability comes about through improved resource utilisation, more efficient systems, and carbon neutrality. Nonetheless, computers and associated infrastructure are major consumers of energy.
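
A crude comparison of the two pricing models behind the first bullet (buying hardware sized for peak load versus metered, utility-style billing) might look like the sketch below; every price and workload figure in it is an illustrative assumption:

    # Illustrative comparison: buying servers up front vs. paying per use.
    # All prices and workload figures are assumptions for the sketch.

    OWNED_SERVER_COST = 3000.0      # assumed purchase price per server
    OWNED_SERVERS_FOR_PEAK = 10     # must be sized for the highest load
    CLOUD_PRICE_PER_HOUR = 0.40     # assumed metered price per server-hour

    # Assumed workload: mostly idle, busy a few hours a day (the 10-20%
    # utilisation mentioned in the list above).
    hours_per_year = 24 * 365
    busy_fraction = 0.15
    server_hours_used = OWNED_SERVERS_FOR_PEAK * hours_per_year * busy_fraction

    capex_model = OWNED_SERVER_COST * OWNED_SERVERS_FOR_PEAK
    opex_model = CLOUD_PRICE_PER_HOUR * server_hours_used

    print(f"Buy for peak (capex): ${capex_model:,.0f} up front")
    print(f"Pay per use (opex):   ${opex_model:,.0f} per year")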

Expert Systems


One of the largest areas of application of artificial intelligence is expert systems, or knowledge-based systems as they are often known. This type of system seeks to exploit the specialised skills or information held by a group of people in specific areas. It can be thought of as a computerised consulting service. It can also be called an information guidance system. Such systems are used for prospecting, for medical diagnosis, and as educational aids. They are also used in engineering and manufacturing, in the control of robots, where they inter-relate with vision systems. The initial attempts to apply artificial intelligence to generalised problems made limited progress, as we have seen, but it was soon realised that more significant progress could be made if the field of interest was restricted.

STRUCTURE

The internal structure of an expert system can be considered to consist of three parts:

the knowledge base; the database; the rule interpreter.

This is analogous to the production system, where we have

the set of productions; the set of facts held as working memory; and a rule interpreter.


The knowledge base holds the set of rules of inference that are used in reasoning. Most of these systems use IF-THEN rules to represent knowledge. Typically systems can have from a few hundred to a few thousand rules.

The database gives the context of the problem domain and is generally considered to be a set of useful facts. These are the facts that satisfy the condition part of the condition-action rules, which is how the IF-THEN rules can be thought of.

The rule interpreter is often known as an inference engine; it controls the knowledge base, using the set of facts to produce even more facts. Communication with the system is ideally provided by a natural language interface, which enables a user to interact with the intelligent system independently of the expert.
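
A minimal sketch of this three-part structure, with toy medical rules and facts standing in for a real system's knowledge, might look like this:

    # Minimal forward-chaining inference engine: a knowledge base of
    # IF-THEN rules, a database of facts, and a rule interpreter that
    # fires rules to derive new facts. Rules and facts are toy examples.

    # Knowledge base: (conditions, conclusion) pairs standing in for IF-THEN rules.
    knowledge_base = [
        ({"has_fever", "has_rash"}, "suspect_measles"),
        ({"suspect_measles"}, "refer_to_doctor"),
    ]

    # Database: the facts known so far (the working memory).
    facts = {"has_fever", "has_rash"}

    # Rule interpreter (inference engine): keep firing rules whose
    # conditions are satisfied until no new facts are produced.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in knowledge_base:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # includes the derived facts 'suspect_measles', 'refer_to_doctor'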

OPERATION OF THE SYSTEM

Again there are three modes to this:

the knowledge acquisition mode;

the consultation mode;

and the explanation mode.

We shall consider each in turn.

KNOWLEDGE ACQUISITION

The system must liaise with people in order to gain knowledge, and these people must be specialists in the appropriate area of activity: for example, medical doctors, geologists or chemists. The knowledge engineer acts as an intermediary between the specialist and the expert system.

Typical of the information that must be gleaned are vocabulary or jargon, general concepts and facts, problems that commonly arise, the solutions to those problems, and skills for solving particular problems. This process of picking the brain of an expert is a specialised form of data capture and makes use of interview techniques. The knowledge engineer is also responsible for the self-consistency of the data loaded, so a number of specific tests have to be performed to ensure that the conclusions reached are sensible.
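
One such consistency test might check that no two rules can fire on the same facts yet reach contradictory conclusions. The sketch below reuses the (conditions, conclusion) rule format from the engine sketch above and assumes a "not_" prefix marks a negated fact; both conventions are illustrative:

    # Illustrative consistency test: flag rule pairs that can fire on the
    # same facts yet conclude a fact and its negation ("not_" prefix is an
    # assumed convention for this sketch).

    rules = [
        ({"has_fever", "has_rash"}, "suspect_measles"),
        ({"has_rash"}, "not_suspect_measles"),   # deliberately contradictory
    ]

    def negation(fact: str) -> str:
        return fact[4:] if fact.startswith("not_") else "not_" + fact

    def find_contradictions(rules):
        clashes = []
        for i, (cond_a, concl_a) in enumerate(rules):
            for cond_b, concl_b in rules[i + 1:]:
                # If one rule's conditions include the other's, both can fire
                # together; contradictory conclusions then signal an error.
                if (cond_a <= cond_b or cond_b <= cond_a) and concl_b == negation(concl_a):
                    clashes.append((concl_a, concl_b))
        return clashes

    print(find_contradictions(rules))  # [('suspect_measles', 'not_suspect_measles')]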

Artificial Intelligence


Artificial intelligence (AI) is the intelligence of machines, and has been defined as "the science and engineering of making intelligent machines."

Philosophy of AI

Artificial intelligence, by claiming to be able to recreate the capabilities of the human mind, is both a challenge and an inspiration for philosophy. Are there limits to how intelligent machines can be? Is there an essential difference between human intelligence and artificial intelligence? Can a machine have a mind and consciousness?


AI research

In the 21st century, AI research has become highly specialized and technical. It is deeply divided into subfields that often fail to communicate with each other.[10] Subfields have grown up around particular institutions, the work of particular researchers, particular problems (listed below), long-standing differences of opinion about how AI should be done (listed as "approaches" below) and the application of widely differing tools.

Deduction, reasoning, problem solving

Early AI researchers developed algorithms that imitated the step-by-step reasoning that human beings use when they solve puzzles, play board games or make logical deductions.[53] By the late 1980s and 1990s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[54]

For difficult problems, most of these algorithms can require enormous computational resources — most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem solving algorithms is a high priority for AI research.[55]
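
The combinatorial explosion is easy to see numerically. The sketch below counts the orderings a brute-force search would have to examine for a tour-planning problem of growing size; the choice of problem is illustrative, but the factorial growth is the general point:

    # The search space of a brute-force tour-ordering problem grows as n!,
    # so modest problem sizes already dwarf any available compute.
    import math

    for n in (5, 10, 15, 20, 25):
        orderings = math.factorial(n)
        print(f"{n:2d} cities -> {orderings:.3e} possible tours")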

Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model.[56] AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside human and animal brains that give rise to this skill.
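
As a flavour of this sub-symbolic style, the sketch below implements a single artificial neuron, the unit that neural net research composes into larger networks; the weights and inputs are arbitrary illustrative values:

    # A single artificial neuron: a weighted sum passed through a squashing
    # function. Networks of such units underlie the "sub-symbolic" approach.
    import math

    def neuron(inputs, weights, bias):
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))   # logistic squashing

    # Arbitrary illustrative weights; real networks learn these from data.
    print(neuron([0.5, 0.9], weights=[1.2, -0.7], bias=0.1))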

Approaches to AI

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues. A few of the most long standing questions that have remained unanswered are these: Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing?[90] Should artificial intelligence simulate natural intelligence, by studying human psychology or animal neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[91] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does artificial intelligence necessarily require solving many unrelated problems?[92]


Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.

The broad classes of outcome for an AI test are:

  • optimal: it is not possible to perform better

  • strong super-human: performs better than all humans

  • super-human: performs better than most humans

  • sub-human: performs worse than most humans
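
These outcome classes can be read as thresholds on performance relative to humans, as in the sketch below; representing performance as the share of humans outperformed, and the exact cut-offs, are assumptions made for illustration:

    # Map a system's performance to the four outcome classes above.
    # "percentile" is the share of humans the system outperforms;
    # this encoding is an assumption made for the sketch.

    def classify(percentile: float, provably_optimal: bool = False) -> str:
        if provably_optimal:
            return "optimal"            # no better performance is possible
        if percentile >= 1.0:
            return "strong super-human" # better than all humans
        if percentile > 0.5:
            return "super-human"        # better than most humans
        return "sub-human"              # worse than most humans

    print(classify(0.99))   # super-human
    print(classify(1.0))    # strong super-human
    print(classify(0.3))    # sub-human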


AAAI

Founded in 1980, the American Association for Artificial Intelligence has expanded its service to the AI community far beyond the National Conference. Today, AAAI offers members and AI scientists a host of services and benefits:


The National Conference on Artificial Intelligence promotes research in AI and scientific interchange among AI researchers, practitioners, and scientists and engineers in related disciplines.

(www.aaai.org/Conferences/National/)


The Conference on Innovative Applications of Artificial Intelligence highlights successful applications of AI technology; explores issues, methods, and lessons learned in the development and deployment of AI applications; and promotes an interchange of ideas between basic and applied AI. (www.aaai.org/Conferences/IAAI/)


The Artificial Intelligence and Interactive Digital Entertainment Conference is intended to be the definitive point of interaction between entertainment software developers interested in AI and academic and industrial researchers. (www.aaai.org/Conferences/AIIDE/)


AAAI’s Spring and Fall Symposia (www.aaai.org/Symposia/) and Workshops (www.aaai.org/Workshops/) programs afford participants a smaller, more intimate setting where they can share ideas and learn from each other's AI research.


AAAI’s Digital Library (www.aaai.org/Library) and Online Services (www.aaai.org/Resources) include a host of resources for the AI professional (including more than 12,000 papers), for individuals with only a general interest in the field (www.aaai.org/AITopics), as well as for the professional press (www.aaai.org/Pressroom).


AAAI Press, in conjunction with The MIT Press, publishes selected books on all aspects of AI (www.aaai.org/Press).


The AI Topics web site gives students and professionals alike links to many online resources on AI (www.aaai.org/AITopics).


AAAI Scholarships benefit students and foster new programs, meetings, and other AI programs. AAAI also recognizes those who have made significant contributions to the science of AI and AAAI through an extensive awards program (www.aaai.org/Awards).


AI Magazine, called the “journal of record for artificial intelligence,” has been published internationally for 25 years (www.aaai.org/Magazine).


AAAI’s Sponsored Journals program (www.aaai.org/Publications/Journals/) gives AAAI members discounts on many of the top AI journals.

www.aaai.org