Monday, November 7, 2011

Wireless Home Networking For Dummies

Wireless Home Networking For Dummies by Danny Briere, Pat Hurley, Edward Ferris
Wiley | English | 2010 | ISBN: 0470877251 | 384 pages | PDF | 4.4 MB



Share stuff safely and wirelessly on Windows PCs or Mac OS X machines
Why go wireless? It's easy, convenient, inexpensive, and, with the emergence of new industry standards, better than ever! These experts know what you should look for (and look out for). They'll walk you through the pros and cons of the different standards, planning and installing your network, setting up security, and getting the most from your investment.



Discover how to: 
• Choose the right networking equipment 
• Integrate Bluetooth into your network 
• Work with servers, gateways, routers, and switches 
• Protect your network from intruders 
• Understand 802.11n


http://depositfiles.com/files/3jcinexhx

Data Mining: Concepts, Models, Methods, and Algorithms

Data Mining: Concepts, Models, Methods, and Algorithms By Mehmed Kantardzic
Publisher: Wiley-IEEE Press; 2nd edition, 2011 | 552 Pages | ISBN: 0470890452 | EPUB | 9 MB


Now updated: the systematic introductory guide to modern analysis of large data sets.

As data sets continue to grow in size and complexity, there has been an inevitable move toward indirect, automatic, and intelligent data analysis in which the analyst works via more complex and sophisticated software tools. This book reviews state-of-the-art methodologies and techniques for analyzing enormous quantities of raw data in high-dimensional data spaces to extract new information for decision-making.

This Second Edition of Data Mining: Concepts, Models, Methods, and Algorithms discusses data mining principles and then describes representative state-of-the-art methods and algorithms originating from different disciplines such as statistics, machine learning, neural networks, fuzzy logic, and evolutionary computation. Detailed algorithms are provided with the necessary explanations and illustrative examples, as well as questions and exercises for practice at the end of each chapter. This new edition features the following new techniques and methodologies:

• Support Vector Machines (SVM)—developed from statistical learning theory, they have great potential for applications in predictive data mining
• Kohonen Maps (Self-Organizing Maps, SOM)—one of the most widely applied neural-network-based methodologies for descriptive data mining and multi-dimensional data visualization
• DBSCAN, BIRCH, and distributed DBSCAN clustering algorithms—representatives of an important class of density-based clustering methodologies
• Bayesian Networks (BN)—a methodology often used for causality modeling
• Algorithms for measuring betweenness and centrality parameters in graphs, important for applications in mining large social networks
• CART algorithm and Gini index in building decision trees
• Bagging and Boosting approaches to ensemble-learning methodologies, with details of the AdaBoost algorithm
• Relief algorithm, one of the core feature selection algorithms inspired by instance-based learning
• PageRank algorithm for mining and authority ranking of web pages (a minimal sketch follows this list)
• Latent Semantic Analysis (LSA) for text mining and measuring semantic similarities between text-based documents
• New sections on temporal, spatial, web, text, parallel, and distributed data mining
• More emphasis on business, privacy, security, and legal aspects of data mining technology
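
To give a sense of how compact some of these algorithms are, here is a minimal power-iteration sketch of PageRank in Python. This is an illustrative sketch, not code from the book: the damping factor of 0.85 and the treatment of dangling pages as linking to every page are conventional assumptions.

import numpy as np

def pagerank(adj, damping=0.85, tol=1e-8, max_iter=100):
    # Power-iteration PageRank on a dense link matrix:
    # adj[i, j] = 1 if page i links to page j, else 0.
    n = adj.shape[0]
    out_degree = adj.sum(axis=1)
    # Each page spreads its rank evenly over its out-links; dangling
    # pages (no out-links) are treated as linking to every page.
    transition = np.where(out_degree[:, None] > 0,
                          adj / np.maximum(out_degree, 1)[:, None],
                          1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * transition.T @ rank
        if np.abs(new_rank - rank).sum() < tol:
            break
        rank = new_rank
    return rank

# Tiny 4-page web: page 2 collects the most in-links and should rank highest.
links = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
print(pagerank(links))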
This text offers guidance on how and when to use a particular software tool (with its companion data sets), out of the hundreds available, when faced with a data set to mine. This allows analysts to create and perform their own data mining experiments using their knowledge of the methodologies and techniques provided. The book emphasizes the selection of appropriate methodologies and data analysis software, as well as parameter tuning. These critically important, qualitative decisions can only be made with the deeper understanding of parameter meaning and its role in the technique that is offered here.

This volume is primarily intended as a data-mining textbook for computer science, computer engineering, and computer information systems majors at the graduate level. Senior undergraduate students with the appropriate background can also successfully comprehend all topics presented here.

http://depositfiles.com/files/whn038uuz

Active networking




Active networking is a communication pattern that allows packets flowing through a telecommunications network to dynamically modify the operation of the network.

How it works

An active network architecture is composed of execution environments (similar to a Unix shell, but able to execute active packets), a node operating system capable of supporting one or more execution environments, and active hardware capable of routing or switching as well as executing the code carried within active packets. This differs from the traditional network architecture, which seeks robustness and stability by removing complexity, and the ability to change fundamental operation, from the underlying network components. Network processors are one means of implementing active networking concepts. Active networks have also been implemented as overlay networks.
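
To make the architecture concrete, here is a toy Python sketch of an execution environment running the code carried in an active packet. The class and function names are illustrative inventions, not any standard active-network API, and a real node would sandbox and resource-limit the packet's code rather than run it directly.

from dataclasses import dataclass, field
from typing import Callable, Dict
import zlib

@dataclass
class ActivePacket:
    payload: bytes
    # In a real active network this would be mobile code (e.g. bytecode);
    # here it is modeled as a plain Python callable.
    program: Callable[["ActivePacket", "ActiveNode"], None]

@dataclass
class ActiveNode:
    name: str
    routing_table: Dict[str, str] = field(default_factory=dict)

    def execute(self, packet: ActivePacket) -> None:
        # The execution environment: run the packet's own code in the
        # context of this node (sandboxing omitted for brevity).
        packet.program(packet, self)

def compress_payload(packet: ActivePacket, node: ActiveNode) -> None:
    # An example packet program: adapt the payload to local conditions,
    # e.g. compress it before a slow outbound link.
    packet.payload = zlib.compress(packet.payload)

node = ActiveNode(name="edge-router")
pkt = ActivePacket(payload=b"x" * 1000, program=compress_payload)
node.execute(pkt)
print(len(pkt.payload))  # far smaller than 1000 after in-network compression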

What does it offer?

Active networking allows the possibility of highly tailored and rapid "real-time" changes to the underlying network operation. This enables such ideas as sending code along with packets of information, allowing the data to change its form (code) to match the channel characteristics. The length of the smallest program that can generate a sequence of data is the subject of Kolmogorov complexity. The use of real-time genetic algorithms within the network to compose network services is also enabled by active networking.
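
A toy illustration of this "send code instead of data" idea, under the obvious assumption that the receiver can safely evaluate the code it receives: a short program can stand in for a long but regular data sequence, and the length of the shortest such program is exactly what Kolmogorov complexity measures.

# A 10,000-byte alternating sequence versus the short program that
# generates it; shipping the program is the compact "form" of the data.
data = bytes(i % 2 for i in range(10_000))
program = "bytes(i % 2 for i in range(10_000))"

regenerated = eval(program)  # the receiver reconstructs the data
assert regenerated == data
print(len(data), len(program))  # the program is orders of magnitude shorter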

Fundamental challenges

Active network research addresses the question of how best to incorporate extremely dynamic capability within networks.[1] To do this, it must address the problem of optimally allocating computation versus communication within communication networks.[2] A related problem, the compression of code as a measure of complexity, is addressed via algorithmic information theory.

Wireless Andrew




Wireless Andrew was the first campus-wide wireless Internet network. Built in 1993,[1] it was located at Carnegie Mellon University's Pittsburgh campus, before Wi-Fi branding originated.[2][3]
Wireless Andrew was a 2-megabit-per-second wireless local area network connected through access points to the wired Andrew network, a high-speed Ethernet backbone linking buildings across the Carnegie Mellon campus. It consisted of 100 access points covering six buildings. The university tested the setup with over 40 mobile units before allowing general use by researchers and students in February 1997.[4]

Technological evolution



Technological evolution is the name of a science and technology studies theory describing technology development, developed by Czech philosopher Radovan Richta.

Theory of technological evolution

According to Richta and later Bloomfield,[1][2] technology (which Richta defines as "a material entity created by the application of mental and physical effort to nature in order to achieve some value") evolves in three stages: tool, machine, automation. This evolution, he says, follows two trends: the replacement of physical labour with more efficient mental labour, and the resulting greater degree of control over one's natural environment, including an ability to transform raw materials into ever more complex and pliable products.

Stages of technological development

The pretechnological period, in which all other animal species remain today (aside from some avian and primate species), was a non-rational period for early prehistoric humans.
The emergence of technology, made possible by the development of the rational faculty, paved the way for the first stage: the tool. A tool provides a mechanical advantage in accomplishing a physical task, and must be powered by human or animal effort.
Hunter-gatherers developed tools mainly for procuring food: tools such as the container, spear, arrow, plow, or hammer, which augment physical labour so that an objective can be achieved more efficiently. Later, animal-powered tools such as the plow and the horse increased the productivity of food production about tenfold over the technology of the hunter-gatherers. Tools allow one to do things impossible to accomplish with one's body alone, such as seeing minute visual detail with a microscope, manipulating heavy objects with a pulley and cart, or carrying volumes of water in a bucket.
The second technological stage was the creation of the machine. A machine (a powered machine, to be more precise) is a tool that substitutes for human physical effort, requiring the operator only to control its function. Machines became widespread with the industrial revolution, though windmills, a type of machine, are much older.
Examples include cars, trains, computers, and lights. Machines allow humans to tremendously exceed the limitations of their bodies. Putting a machine on the farm (the tractor) increased food productivity at least tenfold over the technology of the plow and the horse.
The third and final stage of technological evolution is automation. Automation is a machine that removes the element of human control by means of an automatic algorithm. Examples of machines that exhibit this characteristic are digital watches, automatic telephone switches, pacemakers, and computer programs.
It's important to understand that the three stages outline the introduction of the fundamental types of technology, and so all three continue to be widely used today. A spear, a plow, a pen, and an optical microscope are all examples of tools.

Theoretical implications

The process of technological evolution culminates in the ability to achieve, by mental effort, all the material values that are technologically possible and desirable.
An economic implication of the above idea is that intellectual labour will become increasingly important relative to physical labour. Contracts and agreements around information will become increasingly common in the marketplace. The expansion and creation of new kinds of institutions that work with information, such as universities, book stores, and patent-trading companies, is considered an indication that a civilization is undergoing technological evolution.
Interestingly, this highlights the importance underlying the debate over intellectual property in conjunction with decentralized distribution systems such as today's internet, where the price of information distribution approaches zero as ever more efficient tools for distributing information are invented, and growing amounts of information are distributed to an ever larger customer base. With growing disintermediation in these markets and growing concerns over the protection of intellectual property rights, it is not clear what form markets for information will take as the information age evolves.
