Wednesday, November 26, 2014

Emerging database technologies

                   


INTRODUCTION:

The term database refers to a collection of related records, and the software that manages it is referred to as a database management system, or DBMS. Database management systems are usually categorized according to the data model that they support: relational, object-relational, network, and so on. The data model tends to determine the query languages that are available to access the database. The world of databases boasts many kinds of technologies, catering to the needs of many kinds of organizations. Since 1970, different models and methods have been developed to describe, analyse, and design computer-based files and databases. Relational DBMS technology has been successfully applied to many application domains and has proved to be an effective solution for data management requirements in large and small organizations; today it forms a key component of most information systems. However, applications in domains such as multimedia, geographic information systems, digital libraries, and mobile databases demand a completely different set of requirements from the underlying database models. The conventional relational model is no longer appropriate for these types of data. Furthermore, the volume of data is significantly larger than in classical database systems. Finally, indexing, retrieving, and analysing these data types requires specialized functionality that is not available in conventional database systems. These trends have resulted in the development of new database technologies to handle new data types and applications. This paper covers some requirements of these emerging databases, such as multimedia databases, spatial databases, temporal databases, biological/genome databases, mobile databases, and big data, along with their underlying technologies, data models, and languages.

Some emerging database technologies are:


1-MULTIMEDIA DATABASE:

 Multimedia computing has emerged as a major area of research and now touches almost every facet of daily life. A multimedia database is a database that hosts one or more primary media file types such as video, audio, radar signals, documents, or pictures in various encodings. What these forms have in common is that they are much larger, and vastly more variable in size, than earlier forms of data such as integers and fixed-length character strings. They fall into three main categories:
 Static media (time-independent, e.g. images and handwriting)
 Dynamic media (time-dependent, e.g. video and sound bites)
 Dimensional media (e.g. 3D games or computer-aided drafting programs - CAD)

All primary media files are stored in binary strings of zeroes and ones, and are encoded according to file type. The term "data" is typically referenced from the computer point of view, whereas the term "multimedia" is referenced from the user point of view. There are numerous different types of multimedia databases, including:
 The authentication multimedia database, which performs a one-to-one (1:1) data comparison
 The identification multimedia database, which performs a one-to-many data comparison

A newly emerging type of multimedia database is the biometrics multimedia database, which specializes in automatic human verification based on algorithms applied to a person's behavioural or physiological profile. This method of identification is superior to traditional methods requiring the input of personal identification numbers and passwords, because it removes the need for the person being scanned to remember a PIN or password; it also means the identifying credential cannot simply be forgotten or shared. Fingerprint identification technology is also based on this type of multimedia database. Historic relational databases (e.g. the Binary Large Objects - BLOBs - developed for SQL databases to store multimedia data) do not conveniently support content-based searches for multimedia content. This is because a relational database cannot recognize the internal structure of a Binary Large Object, so internal multimedia data components cannot be retrieved.
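The BLOB limitation described above can be seen directly with a relational store: a BLOB column accepts the raw bytes, but the database offers no way to query inside them. Below is a minimal sketch using Python's built-in sqlite3 module; the table name, filename, and byte content are all illustrative.

```python
import sqlite3

# Store a media file as an opaque BLOB in a relational table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE media (id INTEGER PRIMARY KEY, filename TEXT, content BLOB)")

fake_image = bytes([0x89, 0x50, 0x4E, 0x47])  # PNG magic bytes as stand-in data
conn.execute("INSERT INTO media (filename, content) VALUES (?, ?)",
             ("photo.png", fake_image))

# Retrieval works only by external attributes (id, filename) -- the database
# cannot index or search the *content* of the BLOB itself.
row = conn.execute("SELECT content FROM media WHERE filename = ?",
                   ("photo.png",)).fetchone()
assert row[0] == fake_image
```

Any content-based search (e.g. "find all images containing a face") has to happen outside the database, which is precisely the gap multimedia databases aim to fill.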

2-TEMPORAL DATABASE:

Time is an important aspect of real-world phenomena. Events occur at specific points in time; objects and the relationships among them exist over time. The ability to model this temporal dimension of the real world is essential to many computer applications, such as econometrics, inventory control, airline reservations, medical records, accounting, law, banking, and land and geographic information systems. In contrast, existing database technology provides little support for managing such data. A temporal database is formed by compiling and storing temporal data. The difference between temporal and non-temporal data is that a time period is appended to the data, expressing when it was valid or stored in the database. Conventional databases consider data to be valid at the present time, as in the time instant "now". When data in such a database is modified, removed, or inserted, the state of the database is overwritten to form a new state; the state prior to the change is no longer available. By associating time with data, it becomes possible to store the different database states. In essence, temporal data is formed by time-stamping ordinary data (the type of data we store in conventional databases). In a relational data model, tuples are time-stamped; in an object-oriented data model, objects and attributes are time-stamped. Each item of data has two time values attached to it, a start time and an end time, establishing the time interval of the data; in a relational data model, relations are extended with two additional attributes, one for the start time and another for the end time. Time can be interpreted as valid time (when the data occurred or is true in reality) or transaction time (when the data was entered into the database), giving different forms of temporal databases:
 A historical database stores data with respect to valid time.
 A rollback database stores data with respect to transaction time.
 A bitemporal database stores data with respect to both valid and transaction time, keeping the history of the data along both dimensions.

A central goal of conventional relational database design is to produce a database schema consisting of a set of relation schemas. In normalization theory, normal forms constitute attempts at characterizing "good" relation schemas, and a wide variety of normal forms has been proposed, the most prominent being third normal form and Boyce-Codd normal form. An extensive theory has been developed to provide a solid formal footing for relational database design, and most database textbooks expose their readers to the core of this theory. In temporal databases, there is an even greater need for design guidelines. However, conventional normalization concepts are not applicable to temporal relational data models, because these models employ relational structures different from conventional relations; new temporal normal forms, and the underlying concepts to serve as guidelines during temporal database design, are needed. Temporal data models generally define time-slice operators, which may be used to determine the snapshots contained in a temporal relation. Accepting a temporal relation as their argument and a time point as their parameter, these operators return the snapshot of the relation corresponding to the specified time point. Taking a longer-term and more abstract perspective, it is likely that new database management technologies and application areas will continue to emerge that present 'temporal' challenges. Given the ubiquity of time, its importance to most database applications, and the fact that built-in temporal support offers many benefits yet is challenging to provide, research into the temporal aspects of new database technologies will continue to flourish for existing as well as new application areas.
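The time-stamping and time-slice ideas above can be sketched in a few lines. This is a toy valid-time relation in plain Python, not any particular temporal DBMS; all names, salaries, and dates are illustrative.

```python
from datetime import date

# A valid-time relation: each tuple carries a [start, end) interval.
salary_history = [
    # (employee, salary, valid_start, valid_end)
    ("Alice", 50000, date(2010, 1, 1), date(2012, 1, 1)),
    ("Alice", 55000, date(2012, 1, 1), date(9999, 12, 31)),  # "until changed"
    ("Bob",   48000, date(2011, 6, 1), date(9999, 12, 31)),
]

def time_slice(relation, instant):
    """Time-slice operator: return the conventional (snapshot) relation
    that was valid at the given instant."""
    return [(emp, sal) for emp, sal, start, end in relation
            if start <= instant < end]

# The 2011 snapshot shows Alice's old salary; the 2013 snapshot the new one.
assert time_slice(salary_history, date(2011, 7, 1)) == [("Alice", 50000), ("Bob", 48000)]
assert time_slice(salary_history, date(2013, 1, 1)) == [("Alice", 55000), ("Bob", 48000)]
```

Note that updating Alice's salary did not overwrite the old tuple, as it would in a conventional database; both states remain queryable.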


3-MOBILE DATABASE
The rapid technological development of mobile phones, wireless and satellite communications, and the increased mobility of individual users have resulted in a growing demand for mobile computing. Portable computing devices such as laptops and palmtops, coupled with wireless communications, allow clients to access data from virtually anywhere in the globe, at any time. Mobile databases, interfaced with these developments, allow users such as CEOs, marketing professionals, and finance managers to access any data, anywhere, at any time, and to take business decisions in real time. Mobile databases are especially useful to geographically dispersed organisations.
The flourishing of mobile devices is driving businesses to deliver data to employees and customers wherever they may be. The potential of mobile gear with mobile data is enormous. A salesperson equipped with a PDA running corporate databases can check order status, sales history, and inventory instantly from the client's site, and drivers can use handheld computers to log deliveries and report order changes for a more efficient supply chain.

Recent advances in portable and wireless technology have led to mobile computing, a new dimension in data communication and processing. Portable computing devices coupled with wireless communications allow clients to access data from virtually anywhere and at any time; nowadays you can even connect to your intranet from an aeroplane. Mobile databases allow the development and deployment of database applications for handheld devices, putting relational database applications in the hands of mobile workers. The technology allows employees using handhelds to link to their corporate networks, download data, work offline, and then connect to the network again to synchronize with the corporate database. Mobile computing applications, residing fully or partially on mobile devices, typically use cellular networks to transmit information over wide areas, and wireless LANs over short distances. Some commercially available mobile relational database systems are IBM's DB2 Everywhere, Oracle Lite, and Sybase's SQL Anywhere.
These databases work on palmtop and handheld devices (such as Windows CE devices), providing a local data store for relational data acquired from enterprise SQL databases. The main constraint for such databases is program size, as handheld devices have limited RAM. The commercially available mobile database systems support a wide variety of platforms and data sources. They also allow handheld users to synchronise with Open Database Connectivity (ODBC) database content, as well as with personal information management data and email from Lotus Development's Notes or Microsoft's Exchange. These database technologies support either query-by-example (QBE) or SQL statements. Mobile computing has proved useful in many applications. Many business travellers use laptop computers to work and access data while travelling. Delivery services use mobile computers to assist in tracking the delivery of goods. Emergency response services use mobile computers at disaster sites and medical emergencies to access information and to provide data pertaining to the situation. Newer applications of mobile computing continue to emerge.
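The download / work-offline / synchronize cycle described above can be sketched as follows. This toy last-write-wins scheme, keyed on a version counter, is purely illustrative; real mobile sync engines (such as those in Oracle Lite or DB2 Everywhere) handle conflicts far more elaborately, and all names here are made up.

```python
server_db = {"order42": {"status": "pending", "version": 1}}

# 1. Download a working copy to the handheld.
local_db = {k: dict(v) for k, v in server_db.items()}

# 2. Work offline: the mobile worker updates the record locally.
local_db["order42"]["status"] = "delivered"
local_db["order42"]["version"] += 1

# 3. Reconnect and synchronize: push records whose version is newer
#    than the server's copy (last write wins).
def synchronize(local, server):
    for key, record in local.items():
        if key not in server or record["version"] > server[key]["version"]:
            server[key] = dict(record)

synchronize(local_db, server_db)
assert server_db["order42"]["status"] == "delivered"
```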


4-GEOGRAPHIC INFORMATION SYSTEMS

GIS is a technological field that incorporates geographical features with tabular data in order to map, analyse, and assess real-world problems. The key word in this technology is geography: some portion of the data is spatial, in other words, referenced in some way to locations on the earth. Coupled with this spatial data is usually tabular data known as attribute data, which can be generally defined as additional information about each of the spatial features. Geographic information systems (GIS) are used to collect, model, and analyse information describing physical properties of the geographical world. The scope of GIS broadly encompasses two types of data:
 Spatial data, originating from maps, digital images, administrative and political boundaries, roads, transportation networks, and physical features such as rivers, soil characteristics, climatic regions, and land elevations; and
 Non-spatial data, such as socio-economic data (like census counts), economic data, and sales or marketing information.

GIS is a rapidly developing domain that offers highly innovative approaches to meet some challenging technical demands.

GIS applications can be divided into three categories:
 Cartographic applications
 Digital terrain modelling applications
 Geographic object applications

The figure shows GIS categories and the grouping of different GIS application areas. GIS data can be broadly represented in two formats: vector data and raster data. Vector data represents geometric objects such as points, lines, and polygons. Raster data is characterized as an array of points, where each point represents the value of an attribute for a real-world location. Informally, raster images are n-dimensional arrays where each entry is a unit of the image and represents an attribute; two-dimensional units are called pixels, while three-dimensional units are called voxels. Three-dimensional elevation data is stored in a raster-based digital elevation model (DEM) format. Another format, the triangulated irregular network (TIN), is a topological vector-based approach that models surfaces by connecting sample points as a network of triangles, with a point density that may vary with the roughness of the terrain. Rectangular grids (or elevation matrices) are two-dimensional array structures.
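The raster/vector distinction above can be made concrete with a toy example; the elevation values and coordinates below are made up for illustration and not tied to any GIS package.

```python
# Raster: a toy digital elevation model (DEM), a 2-D grid where each
# cell holds the elevation (metres) of one ground location.
dem = [
    [120, 125, 130],
    [118, 122, 128],
    [115, 119, 124],
]

def elevation_at(row, col):
    """Point query into the raster: one array lookup per location."""
    return dem[row][col]

# Vector: a road represented as an ordered list of (x, y) coordinate
# points -- geometry, not a grid of attribute samples.
road = [(0.0, 0.0), (1.5, 2.0), (3.0, 2.5)]

assert elevation_at(1, 2) == 128
assert len(road) == 3
```

The raster answers "what is the attribute value at this location?" in constant time, while the vector form keeps exact geometry for objects like roads and boundaries; GIS systems routinely store and convert between both.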
5-GENOME DATA

The biological sciences encompass an enormous variety of information. Environmental science gives us a view of how species live and interact in a world filled with natural phenomena. Biology and ecology study particular species. Anatomy focuses on the overall structure of an organism, documenting the physical aspects of individual bodies. Traditional medicine and physiology break the organism into systems and tissues and strive to collect information on the workings of these systems and the organism as a whole. Histology and cell biology delve into the tissue and cellular levels and provide knowledge about the inner structure and function of the cell. This wealth of information, generated, classified, and stored for centuries, has only recently become a major application of database technology. Genetics has emerged as an ideal field for the application of information technology. In a broad sense, it can be thought of as the construction of models based on information about genes – which can be defined as units of heredity – and populations, and the seeking out of relationships in that information. The study of genetics can be divided into three branches:
 Mendelian genetics. This is the study of the transmission of traits between generations.
 Molecular genetics. This is the study of the chemical structure and function of genes at the molecular level.
 Population genetics. This is the study of how genetic information varies across populations of organisms.

Biological data exhibits many special characteristics that make the management of biological information a particularly challenging problem. These characteristics have given rise to a multidisciplinary field called bioinformatics, which addresses the information management of genetic data with special emphasis on DNA sequence analysis. Applications of bioinformatics span the design of drug targets, the study of mutations and related diseases, anthropological investigations of the migration patterns of tribes, and therapeutic treatments. The term genome is defined as the total genetic information that can be obtained about an entity. The human genome, for example, generally refers to the complete set of genes required to create a human being – estimated to be more than 30,000 genes spread over 23 pairs of chromosomes, with an estimated 3 to 4 billion nucleotides. The goal of the Human Genome Project (HGP) has been to obtain the complete sequence – the ordering of the bases – of those nucleotides.
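Two routine DNA-sequence operations of the kind genome databases must support at scale can be sketched as follows; the sequence used is illustrative.

```python
# DNA bases pair A<->T and G<->C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Return the reverse complement strand of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def gc_content(seq):
    """Fraction of bases that are G or C, a common sequence statistic."""
    return (seq.count("G") + seq.count("C")) / len(seq)

seq = "ATGCGC"
assert reverse_complement(seq) == "GCGCAT"
assert gc_content(seq) == 4 / 6
```

The challenge in genome databases is not the operations themselves but applying them, and far more complex sequence-alignment queries, across billions of nucleotides.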



6-DIGITAL LIBRARY

Digital libraries are an important and active research area. Conceptually, a digital library is an analog of a traditional library – a large collection of information sources in various media – coupled with the advantages of digital technologies. However, digital libraries differ from their traditional counterparts in significant ways: storage is digital, remote access is quick and easy, and materials are copied from a master version. Furthermore, keeping extra copies on hand is easy and is not hampered by the budget and storage restrictions that are major problems in traditional libraries. Thus, digital technologies overcome many of the physical and economic limitations of traditional libraries. The Digital Library Initiative (DLI), jointly funded by NSF, DARPA, and NASA, has been a major accelerator of the development of digital libraries. In its first phase, this initiative provided significant funding to six major projects at six universities, covering a broad spectrum of enabling technologies. The initiative's web page defines its focus as to "dramatically advance the means to collect, store, and organize information in digital forms, and make it available for searching, retrieval, and processing via communication networks – all in user-friendly ways". The magnitude of these data collections, as well as their diversity and multiple formats, presents challenges on a new scale. The development of digital libraries is likely to progress from the present technology of retrieval via the internet, through net searches of indexed information in repositories, to a time of information correlation and analysis by intelligent networks. Techniques for collecting, storing, and organizing information to support informational requirements, learned over decades of database design and implementation, will provide the baseline for the development of approaches appropriate for digital libraries.

7- BIG DATA

Nowadays, advancing technology generates large, diverse, longitudinal, complex, and distributed data sets from instruments, sensors, Internet transactions, email, video, click streams, and other digital sources. Individuals with smartphones, social network sites, and multimedia will continue to fuel exponential growth of data. Companies capture trillions of bytes of information about their customers, suppliers, and operations, and millions of networked sensors are being embedded in the physical world in devices such as mobile phones and automobiles, sensing, creating, and communicating data. Big data – large pools of data that can be captured, communicated, aggregated, stored, and analysed – is now part of every sector and function of the global economy. Like other essential factors of production, such as hard assets and human capital, it is increasingly the case that much of modern economic activity, innovation, and growth simply could not take place without data. Big data represents a sea change in the technology we draw upon for making decisions. Organizations will integrate and analyse data from diverse sources, complementing enterprise databases with data from social media, video, smart mobile devices, and other sources. The evolution of information architectures to include big data will likely provide the foundation for a new generation of enterprise infrastructure. To exploit these diverse sources of data for decision-making, an organization must develop an effective strategy for acquiring, organizing, and analysing big data, using it to generate new insights about the business and make better decisions. The previously nebulous definition of "big data" is growing more concrete as it becomes the focus of more applications. As seen in Figure 2, volume, velocity, and variety make up the three key characteristics of big data:
 Volume. Rather than just capturing business transactions and moving samples and aggregates to another database for analysis, applications now capture all possible data for analysis.
 Velocity. Traditional transaction-processing applications captured transactions in real time from end users, but newer applications increasingly capture data streaming in from other systems or even sensors. Traditional applications also move their data to an enterprise data warehouse through a deliberate and careful process that generally focuses on historical analysis; newer applications must often act on data as it arrives.
 Variety. The variety of data is much richer now, because data no longer comes solely from business transactions. It often comes from machines, sensors and unrefined sources, making it much more complex to manage.

8-NOSQL DATABASES

The term NoSQL has been around for just a few years and was invented as a descriptor for a variety of database technologies that emerged to cater to what are known as "Web-scale" or "Internet-scale" demands. In computing, NoSQL (commonly interpreted as "not only SQL") is a broad class of database management systems identified by non-adherence to the widely used relational database management system model. NoSQL databases are not built primarily on tables, and generally do not use SQL for data manipulation. NoSQL database systems are often highly optimized for retrieval and appending operations, and often offer little functionality beyond record storage (e.g. key-value stores). The reduced run-time flexibility compared to full SQL systems is compensated by marked gains in scalability and performance for certain data models. In short, NoSQL database management systems are useful when working with a huge quantity of data whose nature does not require a relational model. The data can be structured, but NoSQL is used when what really matters is the ability to store and retrieve great quantities of data, not the relationships between the elements. Usage examples might be storing millions of key-value pairs in one or a few associative arrays, or storing millions of data records. The fledgling NoSQL marketplace is going through a rapid transition, from predominantly community-driven platform development to a more mature application-driven market. Scaling up web infrastructure on a NoSQL basis has proven successful for Facebook, Digg, and Twitter, and successful attempts have been made to develop NoSQL applications in biotechnology, defence, and image/signal processing. Interest in using key-value pair (KVP) technology has re-emerged to the point where traditional RDBMS vendors are evaluating strategies of developing in-house NoSQL solutions and integrating them into current product offerings.
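The key-value model mentioned above can be sketched in a few lines of Python. This toy in-memory store is illustrative only; it has none of the persistence, replication, or distribution of a real NoSQL system, and all keys and values are made up.

```python
class KeyValueStore:
    """Minimal sketch of the key-value model: opaque values addressed
    only by key, with get/put/delete and no joins or SQL."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def delete(self, key):
        self._data.pop(key, None)

store = KeyValueStore()
store.put("user:1001", {"name": "Ada", "followers": 1024})
assert store.get("user:1001")["name"] == "Ada"

# There is no query planner: answering "all users with > 500 followers"
# means scanning the keys yourself -- the trade-off that buys scalability.
```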
It will not take long before we see acquisitions driven by emerging NoSQL technology. Future deals will likely be made to compete better both in platform offerings and in vertical market segments.

CONCLUSIONS

Applications in domains such as multimedia, geographic information systems, digital libraries, and big data demand a completely different set of requirements from the underlying database models; the conventional relational model is no longer appropriate for these types of data. Furthermore, the volume of data is typically significantly larger than in classical database systems. Finally, indexing, retrieving, and analysing these data types requires specialized functionality that is not available in conventional database systems. Hence, new directions in DBMS technology, such as those described above, are necessary.













Monday, October 27, 2014


AMPLIFIER CLASSES


Amplifier Classes Explained
Not all amplifiers are the same, and there is a clear distinction made between the way their output stages operate. The main operating characteristics of an ideal amplifier are linearity, signal gain, efficiency, and power output, but in real-world amplifiers there is always a trade-off between these different characteristics.
Generally, large signal or Power Amplifiers are used in the output stages of audio amplifier systems to drive a loudspeaker load. A typical loudspeaker has an impedance of between 4Ω and 8Ω, thus a power amplifier must be able to supply the high peak currents required to drive the low impedance speaker.
One method used to distinguish the electrical characteristics of different types of amplifiers is by "class": amplifiers are classified according to their circuit configuration and method of operation, and amplifier class is the term used to differentiate between the different amplifier types.
An amplifier's class represents the amount of the output signal that varies within the amplifier circuit over one cycle of operation when excited by a sinusoidal input signal. The classification of amplifiers ranges from entirely linear operation (for use in high-fidelity signal amplification) with very low efficiency, to entirely non-linear operation (where faithful signal reproduction is not so important) with much higher efficiency, while others are a compromise between the two.
Amplifier classes are mainly lumped into two basic groups. The first are the classically controlled conduction angle amplifiers forming the more common amplifier classes of A, B, AB and C, which are defined by the length of their conduction state over some portion of the output waveform, such that the output stage transistor operation lies somewhere between being “fully-ON” and “fully-OFF”.
The second set of amplifiers are the newer so-called "switching" amplifier classes of D, E, F, G, S, T, etc., which use digital circuits and pulse width modulation (PWM) to constantly switch the signal between "fully-ON" and "fully-OFF", driving the output hard into the transistor's saturation and cut-off regions.
The most commonly constructed amplifier classes are those used as audio amplifiers, mainly classes A, B, AB, and C. To keep things simple, it is these amplifier classes we will look at here in more detail.
Class A amplifiers are the most common type of amplifier class, due mainly to their simple design. Class A amplifiers are often regarded as the best-sounding of the amplifier classes mentioned here, owing to their low signal distortion levels. The class A amplifier has the highest linearity of the amplifier classes, as it operates entirely in the linear portion of the device's characteristic curve.
Generally class A amplifiers use the same single transistor (Bipolar, FET, IGBT, etc) connected in a common emitter configuration for both halves of the waveform with the transistor always having current flowing through it, even if it has no base signal. This means that the output stage whether using a Bipolar, MOSFET or IGBT device, is never driven fully into its cut-off or saturation regions but instead has a base biasing Q-point in the middle of its load line. Then the transistor never turns “OFF” which is one of its main disadvantages.


To achieve high linearity and gain, the output stage of a class A amplifier is biased “ON” (conducting) all the time. Then for an amplifier to be classified as “Class A” the zero signal idle current in the output stage must be equal to or greater than the maximum load current (usually a loudspeaker) required to produce the largest output signal.
As a class A amplifier operates in the linear portion of its characteristic curves, the single output device conducts through a full 360 degrees of the output waveform. Then the class A amplifier is equivalent to a current source.
Since a class A amplifier operates in the linear region, the transistor's base (or gate) DC biasing voltage should be chosen properly to ensure correct operation and low distortion. However, as the output device is "ON" at all times, it is constantly carrying current, which represents a continuous loss of power in the amplifier.
Due to this continuous loss of power, class A amplifiers create tremendous amounts of heat, adding to their very low efficiency of around 30% and making them impractical for high-power amplification. Also, due to the amplifier's high idling current, the power supply must be sized accordingly and be well filtered to avoid amplifier hum and noise. Therefore, due to the low efficiency and overheating problems of class A amplifiers, more efficient amplifier classes have been developed.
Class B Amplifier
Class B amplifiers were invented as a solution to the efficiency and heating problems associated with the class A amplifier. The basic class B amplifier uses two complementary transistors, either bipolar or FET, one for each half of the waveform, with its output stage configured in a "push-pull" arrangement, so that each transistor device amplifies only half of the output waveform.
In the class B amplifier there is no DC base bias current, as its quiescent current is zero, so the DC power dissipation is small and its efficiency is much higher than that of the class A amplifier. However, the price paid for the improvement in efficiency is in the linearity of the switching device.
When the input signal goes positive, the positive biased transistor conducts while the negative transistor is switched “OFF”. Likewise, when the input signal goes negative, the positive transistor switches “OFF” while the negative biased transistor turns “ON” and conducts the negative portion of the signal. Thus the transistor conducts only half of the time, either on positive or negative half cycle of the input signal.
Then we can see that each transistor device of the class B amplifier only conducts through one half or 180 degrees of the output waveform in strict time alternation, but as the output stage has devices for both halves of the signal waveform the two halves are combined together to produce the full linear output waveform.
This push-pull design of amplifier is obviously more efficient than class A, at about 50%, but the problem with the class B design is that it can create distortion at the zero-crossing point of the waveform, due to the transistors' dead band of input base voltages from -0.7V to +0.7V.
We remember from the Transistor tutorial that it takes a base-emitter voltage of about 0.7 volts to get a bipolar transistor to start conducting. Then in a class B amplifier, the output transistor is not “biased” to an “ON” state of operation until this voltage is exceeded.
This means that the part of the waveform which falls within this 0.7 volt window will not be reproduced accurately, making the class B amplifier unsuitable for precision audio amplifier applications.
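The dead-band behaviour described above can be simulated numerically. This sketch assumes an idealized 0.7 V turn-on voltage and ignores device gain and biasing details; it is only an illustration of where the crossover distortion comes from.

```python
import math

VBE_ON = 0.7  # approximate silicon base-emitter turn-on voltage

def class_b_output(vin):
    """Idealized class B push-pull stage: each device conducts only
    once the input exceeds its turn-on voltage."""
    if vin > VBE_ON:
        return vin - VBE_ON    # NPN (positive-half) device conducting
    if vin < -VBE_ON:
        return vin + VBE_ON    # PNP (negative-half) device conducting
    return 0.0                 # dead band: neither device is on

# Sample one cycle of a 2 V-peak sine wave through the stage.
samples = [2.0 * math.sin(2 * math.pi * n / 100) for n in range(100)]
out = [class_b_output(v) for v in samples]

# Near the zero crossings the output is clamped to zero -- the distortion.
assert class_b_output(0.3) == 0.0
assert abs(class_b_output(2.0) - 1.3) < 1e-12
```

Plotting `out` against `samples` would show the flat "notch" at every zero crossing that gives crossover distortion its characteristic sound.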
To overcome this zero-crossing distortion (also known as Crossover Distortion) class AB amplifiers were developed.
Class AB Amplifier
As its name suggests, the Class AB Amplifier is a combination of the "Class A" and "Class B" type amplifiers we have looked at above. The AB classification is currently one of the most commonly used types of audio power amplifier design. The class AB amplifier is a variation of the class B amplifier described above, except that both devices are allowed to conduct at the same time around the waveform's crossover point, eliminating the crossover distortion problems of the previous class B amplifier.
The two transistors are given a very small bias, typically 5 to 10% of the quiescent current, to hold them just above their cut-off point. Then the conducting device, either bipolar or FET, will be "ON" for more than one half cycle, but much less than one full cycle of the input signal. Therefore, in a class AB amplifier design each of the push-pull transistors is conducting for slightly more than the half cycle of conduction in class B, but much less than the full cycle of conduction of class A.
In other words, the conduction angle of a class AB amplifier is somewhere between 180° and 360°, depending upon the chosen bias point as shown.

The advantage of this small bias voltage, provided by series diodes or resistors, is that the crossover distortion created by the class B amplifier characteristics is overcome, without the inefficiencies of the class A amplifier design. So the class AB amplifier is a good compromise between class A and class B in terms of efficiency and linearity, with conversion efficiencies reaching about 50% to 60%.
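The relationship between bias point and conduction angle can be sketched as a quick calculation. This toy model assumes an idealized 0.7 V turn-on threshold and ignores device gain; the bias and drive voltages are illustrative, not taken from any real design:

```python
import math

VBE_ON = 0.7  # assumed transistor turn-on voltage (volts)

def conduction_angle_deg(v_bias, v_peak):
    """Conduction angle of one device in a push-pull stage whose base
    is pre-biased at v_bias, driven by a sine of amplitude v_peak.
    The device conducts while v_bias + v_peak*sin(wt) > VBE_ON."""
    k = (VBE_ON - v_bias) / v_peak
    k = max(-1.0, min(1.0, k))            # clamp to a valid sine level
    return math.degrees(math.pi - 2 * math.asin(k))

print(conduction_angle_deg(0.0, 2.0))    # no bias: under 180 deg (dead band)
print(conduction_angle_deg(0.75, 2.0))   # class AB: slightly over 180 deg
print(conduction_angle_deg(2.7, 2.0))    # heavy bias: full 360 deg, class A
```

Biasing just past the 0.7 V threshold nudges the conduction angle just past 180°, which is exactly the class AB operating region described above.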
Class C Amplifier
The Class C Amplifier design has the greatest efficiency but the poorest linearity of the classes of amplifiers mentioned here. The previous classes, A, B and AB are considered linear amplifiers, as the output signal's amplitude and phase are linearly related to the input signal's amplitude and phase.
However, the class C amplifier is heavily biased so that the output current is zero for more than one half of an input sinusoidal signal cycle, with the transistor idling at its cut-off point. In other words, the conduction angle for the transistor is significantly less than 180 degrees, and is generally around 90 degrees.
While this form of transistor biasing gives the amplifier a much improved efficiency of around 80%, it introduces a very heavy distortion of the output signal. Therefore, class C amplifiers are not suitable for use as audio amplifiers.


Due to its heavy audio distortion, class C amplifiers are commonly used in high frequency sine wave oscillators and certain types of radio frequency amplifiers, where the pulses of current produced at the amplifier's output can be converted to complete sine waves of a particular frequency by the use of LC resonant circuits in its collector circuit.
Amplifier Classes Summary
Then we have seen that the quiescent DC operating point (Q-point) of an amplifier determines the amplifier classification. By setting the position of the Q-point at half way along the load line of the amplifier's characteristic curve, the amplifier will operate as a class A amplifier. Moving the Q-point lower down the load line changes the amplifier into a class AB, B or C amplifier.
Then the class of operation of the amplifier with regards to its DC operating point can be given as
Amplifier Classes and Efficiency


As well as audio amplifiers there are a number of high efficiency Amplifier Classes relating to switching amplifier designs that use different switching techniques to reduce power loss and increase efficiency. Some amplifier class designs listed below use RLC resonators or multiple power-supply voltages to reduce power loss, or are digital DSP (digital signal processing) type amplifiers which use pulse width modulation (PWM) switching techniques.
Other Amplifier Classes
·         Class D Amplifier – A Class D audio amplifier is basically a non-linear switching amplifier or PWM amplifier. Class-D amplifiers theoretically can reach 100% efficiency, as there is no period during a cycle where the voltage and current waveforms overlap, since current is drawn only through the transistor that is on.
·         Class F Amplifier – Class-F amplifiers boost both efficiency and output by using harmonic resonators in the output network to shape the output waveform into a square wave. Class-F amplifiers are capable of high efficiencies of more than 90% if infinite harmonic tuning is used.
·         Class G Amplifier – Class G offers enhancements to the basic class AB amplifier design. Class G uses multiple power supply rails of various voltages and automatically switches between these supply rails as the input signal changes. This constant switching reduces the average power consumption, and therefore power loss caused by wasted heat.
·         Class I Amplifier – The class I amplifier has two sets of complementary output switching devices arranged in a parallel push-pull configuration with both sets of switching devices sampling the same input waveform. One device switches the positive half of the waveform, while the other switches the negative half similar to a class B amplifier. With no input signal applied, or when a signal reaches the zero crossing point, the switching devices are both turned ON and OFF simultaneously with a 50% PWM duty cycle cancelling out any high frequency signals.

To produce the positive half of the output signal, the output of the positive switching device is increased in duty cycle while the negative switching device is decreased by the same amount, and vice versa. The two switching signal currents are said to be interleaved at the output, giving the class I amplifier the name of "interleaved PWM amplifier", operating at switching frequencies in excess of 250kHz.
·         Class S Amplifier – A class S power amplifier is a non-linear switching mode amplifier similar in operation to the class D amplifier. The class S amplifier converts analogue input signals into digital square wave pulses by a delta-sigma modulator, and amplifies them to increase the output power before finally being demodulated by a band pass filter. As the digital signal of this switching amplifier is always either fully "ON" or "OFF" (theoretically zero power dissipation), efficiencies reaching 100% are possible.
·         Class T Amplifier – The class T amplifier is another type of digital switching amplifier design. Class T amplifiers are starting to become more popular these days as an audio amplifier design due to the existence of digital signal processing (DSP) chips and multi-channel surround sound amplifiers as it converts analogue signals into digital pulse width modulated (PWM) signals for amplification increasing the amplifiers efficiency. Class T amplifier designs combine both the low distortion signal levels of class AB amplifier and the power efficiency of a class D amplifier.
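The PWM principle behind the switching classes above (D, S and T) can be sketched in a few lines: compare the audio signal against a high-frequency triangle carrier, output only two levels, and rely on a low-pass filter (modelled here as a local average) to recover the audio. This is an idealized illustration; the frequencies and sample counts are arbitrary:

```python
import math

def triangle(t, f):
    """Symmetric triangle wave in [-1, 1] at frequency f."""
    phase = (t * f) % 1.0
    return 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase

def pwm_output(t, f_sig=1_000, f_carrier=250_000):
    """Two-level switching output: +1 when the audio sine exceeds the
    high-frequency triangle carrier, else -1."""
    return 1.0 if math.sin(2 * math.pi * f_sig * t) > triangle(t, f_carrier) else -1.0

def local_average(t0, f_carrier=250_000, n=1000):
    """Average the switched output over one carrier period, roughly
    what the output low-pass (LC) filter does."""
    dt = (1 / f_carrier) / n
    return sum(pwm_output(t0 + i * dt) for i in range(n)) / n

# At t = 1/12000 s the 1 kHz sine is at sin(pi/6) = 0.5, and the
# filtered PWM output recovers approximately that value.
print(local_average(1 / 12000))
```

Because the output devices only ever sit at the two rails, they dissipate (ideally) no power, which is where the near-100% efficiency claims for these classes come from.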
We have seen here a number of classifications of amplifiers, ranging from linear power amplifiers to non-linear switching amplifiers, and have seen how an amplifier class differs along the amplifier's load line. The class AB, B and C amplifiers can be defined in terms of the conduction angle, θ, as follows:
Amplifier Class by Conduction Angle

Amplifier Class    Description                              Conduction Angle
Class-A            Full cycle 360° of conduction            θ = 2π
Class-B            Half cycle 180° of conduction            θ = π
Class-AB           Slightly more than 180° of conduction    π < θ < 2π
Class-C            Slightly less than 180° of conduction    θ < π
Class-D to T       ON-OFF non-linear switching              θ = 0
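For the linear classes, the efficiency figures quoted above can be tied to a simple idealized calculation. The sketch below derives the textbook efficiency of an ideal class B push-pull stage (lossless drive and a 1-ohm load are assumed): the theoretical maximum is π/4 ≈ 78.5%, and at partial output swing, where an amplifier spends most of its time on real programme material, it falls to the roughly 50% quoted earlier:

```python
import math

def class_b_efficiency(v_peak, v_cc):
    """Idealized class B push-pull stage on +/-v_cc rails driving a
    1-ohm resistive load with a sine of peak v_peak (volts)."""
    p_load = v_peak ** 2 / 2          # average power of a sine into 1 ohm
    # Each rail sources a half-sine of current whose average is v_peak/pi,
    # so the two supplies together deliver 2 * v_cc * v_peak / pi watts.
    p_supply = 2 * v_cc * v_peak / math.pi
    return p_load / p_supply          # simplifies to pi*v_peak/(4*v_cc)

print(class_b_efficiency(1.0, 1.0))   # theoretical maximum, pi/4 ~ 78.5%
print(class_b_efficiency(0.63, 1.0))  # ~50% at partial output swing
```

The efficiency scales linearly with drive level, which is why practical quoted figures sit well below the theoretical maximum.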

Audio Amplifier Classifications
The following information was written in the late 1990's by Dennis A. Bohn and may be referenced on Rane's professional audio reference page in its entirety (assuming the link still works). The Rane site has a large amount of great details and information; I suggest those interested in audio visit and read up. The information for this page may have been updated since it was originally posted such a long time ago.

amplifier classes Audio power amplifiers are classified according to the relationship between the output voltage swing and the input voltage swing, thus it is primarily the design of the output stage that defines each class. Classification is based on the amount of time the output devices operate during one complete cycle of signal swing. This is also defined in terms of output bias current [the amount of current flowing in the output devices with no applied signal]. For discussion purposes (with the exception of class A), assume a simple output stage consisting of two complementary devices (one positive polarity and one negative polarity) -- tubes (valves) or any type of transistor (bipolar, MOSFET, JFET, IGFET, IGBT, etc.). 

--class A operation is where both devices conduct continuously for the entire cycle of signal swing, or the bias current flows in the output devices at all times. The key ingredient of class A operation is that both devices are always on. There is no condition where one or the other is turned off. Because of this, class A amplifiers in reality are not complementary designs. They are single-ended designs with only one type polarity output devices. They may have "bottom side" transistors but these are operated as fixed current sources, not amplifying devices. Consequently class A is the most inefficient of all power amplifier designs, averaging only around 20% (meaning you draw about 5 times as much power from the source as you deliver to the load!) Thus class A amplifiers are large, heavy and run very hot. All this is due to the amplifier constantly operating at full power. The positive effect of all this is that class A designs are inherently the most linear, with the least amount of distortion. [Much mystique and confusion surrounds the term class A. Many mistakenly think it means circuitry comprised of discrete components (as opposed to integrated circuits). Such is not the case. A great many integrated circuits incorporate class A designs, while just as many discrete component circuits do not use class A designs.] 

--class B operation is the opposite of class A. Both output devices are never allowed to be on at the same time, or the bias is set so that current flow in a specific output device is zero when not stimulated with an input signal, i.e., the current in a specific output flows for one half cycle. Thus each output device is on for exactly one half of a complete sinusoidal signal cycle. Due to this operation, class B designs show high efficiency but poor linearity around the crossover region. This is due to the time it takes to turn one device off and the other device on, which translates into extreme crossover distortion. Thus restricting class B designs to power consumption critical applications, e.g., battery operated equipment, such as 2-way radio and other communications audio. 

--class AB operation is the intermediate case. Here both devices are allowed to be on at the same time (like in class A), but just barely. The output bias is set so that current flows in a specific output device appreciably more than a half cycle but less than the entire cycle. That is, only a small amount of current is allowed to flow through both devices, unlike the complete load current of class A designs, but enough to keep each device operating so they respond instantly to input voltage demands. Thus the inherent non-linearity of class B designs is eliminated, without the gross inefficiencies of the class A design. It is this combination of good efficiency (around 50%) with excellent linearity that makes class AB the most popular audio amplifier design. 

--class AB plus B design involves two pairs of output devices: one pair operates class AB while the other (slave) pair operates class B. 

--class C use is restricted to the broadcast industry for radio frequency (RF) transmission. Its operation is characterized by turning on one device at a time for less than one half cycle. In essence, each output device is pulsed-on for some percentage of the half cycle, instead of operating continuously for the entire half cycle. This makes for an extremely efficient design capable of enormous output power. It is the magic of RF tuned circuits (flywheel effect) that overcomes the distortion created by class C pulsed operation. 

--class D operation is switching, hence the term switching power amplifier. Here the output devices are rapidly switched on and off at least twice per cycle of the signal (per the Sampling Theorem). Theoretically since the output devices are either completely on or completely off they do not dissipate any power. If a device is on there is a large amount of current flowing through it, but all the voltage is across the load, so the power dissipated by the device is zero (found by multiplying the voltage across the device [zero] times the current flowing through the device [big], so 0 x big = 0); and when the device is off, the voltage is large, but the current is zero so you get the same answer. Consequently class D operation is theoretically 100% efficient, but this requires zero on-impedance switches with infinitely fast switching times -- a product we're still waiting for; meanwhile designs do exist with true efficiencies approaching 90%. 

--class E operation involves amplifiers designed for rectangular input pulses, not sinusoidal audio waveforms. The output load is a tuned circuit, with the output voltage resembling a damped single pulse. 

The following terms, while generally agreed upon, are not considered "official" classifications 

--class F [If the person from Motorola Communications Division (I believe) who wrote me with all the great input re broadcast amp classes, could write me again. I would appreciate it. I did all the suggested edits, then promptly threw away your suggestions, forgot to save the file, and lost them all! 
Write me (Dennisb@rane.com). Thanks!] 

--class G operation involves changing the power supply voltage from a lower level to a higher level when larger output swings are required. There have been several ways to do this. The simplest involves a single class AB output stage that is connected to two power supply rails by a diode, or a transistor switch. The design is such that for most musical program material, the output stage is connected to the lower supply voltage, and automatically switches to the higher rails for large signal peaks [thus the nickname rail-switcher]. Another approach uses two class AB output stages, each connected to a different power supply voltage, with the magnitude of the input signal determining the signal path. Using two power supplies improves efficiency enough to allow significantly more power for a given size and weight. Class G is becoming common for pro audio designs. [Historical note: Hitachi is credited with pioneering class G designs with their 1977 Dynaharmony HMA 8300 power amplifier.] 

--class H operation takes the class G design one step further and actually modulates the higher power supply voltage by the input signal. This allows the power supply to track the audio input and provide just enough voltage for optimum operation of the output devices [thus the nickname rail-tracker]. The efficiency of class H is comparable to class G designs. [Historical note: Soundcraftsmen is credited with pioneering class H designs with their 1977 Vari-proportional MA5002 power amplifier.]


Sunday, June 22, 2014

Emerging database technologies


Some emerging database technologies are:


1-MULTIMEDIA DATABASE:

 Multimedia computing has emerged as a major area of research and has come to influence many facets of everyday life. A multimedia database is a database that hosts one or more primary media file types such as video, audio, radar signals and documents or pictures in various encodings. What these forms have in common is that they are much larger than earlier data types (integers, character strings of fixed length) and of vastly varying size. They fall into three main categories:
 Static media (time-independent, e.g. images and handwriting)
 Dynamic media (time-dependent, e.g. video and sound bites)
 Dimensional media (e.g. 3D games or computer-aided drafting programs - CAD)

All primary media files are stored in binary strings of zeroes and ones, and are encoded according to file type. The term "data" is typically referenced from the computer point of view, whereas the term "multimedia" is referenced from the user point of view. There are numerous different types of multimedia databases, including:
 The Authentication Multimedia Database, which performs one-to-one (1:1) data comparisons
 The Identification Multimedia Database, which performs one-to-many data comparisons

A newly emerging type of multimedia database is the Biometrics Multimedia Database, which specializes in automatic human verification based on algorithms applied to a person's behavioural or physiological profile. This method of identification is superior to traditional multimedia database methods requiring the typical input of personal identification numbers and passwords, since the person being identified does not need to be physically present where the identification check is taking place. This also removes the need for the person being scanned to remember a PIN or password. Fingerprint identification technology is also based on this type of multimedia database. The historic relational databases (e.g. the Binary Large Objects - BLOBs - developed for SQL databases to store multimedia data) do not conveniently support content-based searches for multimedia content. This is because a relational database cannot recognize the internal structure of a Binary Large Object, and therefore internal multimedia data components cannot be retrieved.
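The BLOB limitation described above can be illustrated with a few lines of SQL through Python's sqlite3 module. This is a minimal sketch (the table and column names are invented): the database stores and returns the media bytes, but the only queryable properties are external metadata such as type and length, not the media's internal content:

```python
import sqlite3

# Minimal sketch: storing a media file as a BLOB in a relational table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE media (
    id INTEGER PRIMARY KEY,
    kind TEXT,            -- e.g. 'image', 'audio', 'video'
    content BLOB)""")

fake_image = bytes(range(256))  # stand-in for real encoded image bytes
conn.execute("INSERT INTO media (kind, content) VALUES (?, ?)",
             ("image", fake_image))

# The database sees only an opaque byte string: queries can touch
# metadata (kind, length) but not the picture's visual content.
kind, size = conn.execute(
    "SELECT kind, length(content) FROM media").fetchone()
print(kind, size)
```

Content-based retrieval (e.g. "find images similar to this one") requires specialized indexing outside the relational model, which is the gap multimedia databases aim to fill.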

2-TEMPORAL DATABASE:

Time is an important aspect of real world phenomena. Events occur at specific points in time. Objects and relationships among objects exist over time. The ability to model this temporal dimension of the real world is essential to many computer applications such as econometrics, inventory control, airline reservations, medical records, accounting, law, banking, land and geographical information systems. In contrast, existing database technology provides little support for managing such data.

A temporal database is formed by compiling and storing temporal data. The difference between temporal data and non-temporal data is that a time period is appended to data expressing when it was valid or stored in the database. Conventional databases consider data to be valid at the present time, the instant "now". When data in such a database is modified, removed or inserted, the state of the database is overwritten to form a new state. The state prior to any changes to the database is no longer available. Thus, by associating time with data, it is possible to store the different database states. In essence, temporal data is formed by time-stamping ordinary data (the type of data we associate with and store in conventional databases). In a relational data model, tuples are time-stamped, and in an object-oriented data model, objects/attributes are time-stamped. Each item of ordinary data has two time values attached to it, a start time and an end time, establishing the time interval of the data. In a relational data model, relations are extended with two additional attributes, one for start time and another for end time.

Different forms of temporal databases: time can be interpreted as valid time (when data occurred or is true in reality) or transaction time (when data was entered into the database).
 a historical database stores data with respect to valid time.
 a rollback database stores data with respect to transaction time.
 a bitemporal database stores data with respect to both valid and transaction time.

Bitemporal databases thus store the history of data with respect to both valid time and transaction time.

A central goal of conventional relational database design is to produce a database schema consisting of a set of relation schemas. In normalization theory, normal forms constitute attempts at characterizing "good" relation schemas, and a wide variety of normal forms has been proposed, the most prominent being third normal form and Boyce-Codd normal form. An extensive theory has been developed to provide a solid formal footing for relational database design, and most database textbooks expose their readers to the core of this theory. In temporal databases, there is an even greater need for database design guidelines. However, the conventional normalization concepts are not applicable to temporal relational data models because these models employ relational structures different from conventional relations. New temporal normal forms and underlying concepts that may serve as guidelines during temporal database design are needed.

Temporal data models generally define time slice operators, which may be used to determine the snapshots contained in a temporal relation. Accepting a temporal relation as their argument and a time point as their parameter, these operators return the snapshot of the relation corresponding to the specified time point.

Adopting a longer term and more abstract perspective, it is likely that new database management technologies and application areas will continue to emerge that provide 'temporal' challenges. Due to the ubiquity of time and its importance to most database management applications, and because built-in temporal support generally offers many benefits and is challenging to provide, research in the temporal aspects of new database management technologies will continue to flourish for existing as well as new application areas.
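A valid-time relation and its time-slice operator can be sketched as follows. This is a toy illustration (the salary data and the sentinel end date are invented), showing how time-stamped tuples preserve past database states:

```python
from datetime import date

# Hypothetical valid-time relation: each tuple carries a [start, end)
# interval stating when it was true in reality.
salaries = [
    ("alice", 50_000, date(2012, 1, 1), date(2013, 1, 1)),
    ("alice", 55_000, date(2013, 1, 1), date(9999, 12, 31)),  # current fact
    ("bob",   48_000, date(2012, 6, 1), date(2014, 1, 1)),
]

def time_slice(relation, at):
    """Time-slice operator: the snapshot of a valid-time relation at
    one instant, i.e. the tuples whose interval contains 'at'."""
    return [(name, value) for name, value, start, end in relation
            if start <= at < end]

print(time_slice(salaries, date(2012, 7, 1)))  # the mid-2012 snapshot
```

A conventional database would have overwritten Alice's old salary; here the earlier state remains recoverable by slicing at an earlier instant.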


3-MOBILE DATABASE
The rapid technological development of mobile phones (cell phones), wireless and satellite communications, and the increased mobility of individual users have resulted in increasing demand for mobile computing. Portable computing devices such as laptop and palmtop computers, coupled with wireless communications, allow clients to access data from virtually anywhere and at any time. Mobile databases, interfaced with these developments, allow users such as CEOs, marketing professionals and finance managers to access any data, anywhere, at any time to make business decisions in real time. Mobile databases are especially useful to geographically dispersed organisations.
The proliferation of mobile devices is driving businesses to deliver data to employees and customers wherever they may be. The potential of mobile gear with mobile data is enormous. A salesperson equipped with a PDA running corporate databases can check order status, sales history and inventory instantly from the client's site, and drivers can use handheld computers to log deliveries and report order changes for a more efficient supply chain.

Recent advances in portable and wireless technology led to mobile computing, a new dimension in data communication and processing. Portable computing devices coupled with wireless communications allow clients to access data from virtually anywhere and at any time; nowadays you can even connect to your intranet from an airplane. Mobile databases are databases that allow the development and deployment of database applications for handheld devices, thus enabling relational database applications in the hands of mobile workers. The database technology allows employees using handhelds to link to their corporate networks, download data, work offline, and then connect to the network again to synchronize with the corporate database. Mobile computing applications, residing fully or partially on mobile devices, typically use cellular networks to transmit information over wide areas, and wireless LANs over short distances. Some of the commercially available mobile relational database systems are IBM's DB2 Everywhere 1.0, Oracle Lite, Sybase's SQL, etc.
These databases work on palmtop and handheld devices (such as Windows CE devices), providing a local data store for relational data acquired from enterprise SQL databases. The main constraints for such databases relate to the size of the program, as handheld devices have limited RAM. The commercially available mobile database systems support a wide variety of platforms and data sources. They also allow users with handhelds to synchronise with Open Database Connectivity (ODBC) database content, and with personal information management data and email from Lotus Development's Notes or Microsoft's Exchange. These database technologies support either query-by-example (QBE) or SQL statements. Mobile computing has proved useful in many applications. Many business travelers use laptop computers to enable them to work and access data while traveling. Delivery services are using mobile computers to assist in tracking the delivery of goods. Emergency response services are using mobile computers at disaster sites, medical emergencies, etc. to access information and to provide data pertaining to the situation. Newer applications of mobile computers are also emerging.
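The download / work-offline / synchronize cycle described above can be caricatured in a few lines. This is a deliberately simplified sketch (invented data, a bare version counter, last-writer-wins conflict resolution); real mobile database systems use far more sophisticated synchronization protocols:

```python
# Toy server-side store: row key -> record with a version counter.
server = {"order-1": {"status": "pending", "version": 1}}

def download(server_db):
    """Copy rows to the handheld's local store for offline work."""
    return {k: dict(v) for k, v in server_db.items()}

def synchronize(server_db, local_db):
    """Push local edits back; a newer version wins (last-writer-wins)."""
    for key, row in local_db.items():
        current = server_db.get(key)
        if current is None or row["version"] > current["version"]:
            server_db[key] = dict(row)

local = download(server)                   # 1. connect and download
local["order-1"]["status"] = "shipped"     # 2. work offline
local["order-1"]["version"] += 1
synchronize(server, local)                 # 3. reconnect and synchronize
print(server["order-1"]["status"])
```

The interesting engineering in real products lies in what this sketch glosses over: detecting conflicting edits made on both sides while disconnected.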


4-GEOGRAPHIC INFORMATION SYSTEMS

GIS is a technological field that incorporates geographical features with tabular data in order to map, analyse, and assess real-world problems. The key word in this technology is Geography - this means that some portion of the data is spatial; in other words, data that is in some way referenced to locations on the earth. Coupled with this data is usually tabular data known as attribute data, which can be generally defined as additional information about each of the spatial features. Geographic information systems (GIS) are used to collect, model, and analyse information describing physical properties of the geographical world. The scope of GIS broadly encompasses two types of data:
 Spatial data, originating from maps, digital images, administrative and political boundaries, roads, transportation networks, physical data, such as rivers, soil characteristics, climatic regions, land elevations, and
 Non-spatial data, such as socio-economic data (like census counts), economic data, and sales or marketing information.

GIS is a rapidly developing domain that offers highly innovative approaches to meet some challenging technical demands.

GIS Applications can be divided into three categories
 Cartographic applications
 Digital terrain modelling applications
 Geographic objects applications


The figure shows GIS categories and the grouping of different GIS application areas. GIS data can be broadly represented in two formats, vector data and raster data. Vector data represents geometric objects such as points, lines and polygons. Raster data is characterized as an array of points, where each point represents the value of an attribute for a real-world location. Informally, raster images are n-dimensional arrays where each entry is a unit of the image and represents an attribute. Two-dimensional units are called pixels, while three-dimensional units are called voxels. Three-dimensional elevation data is stored in a raster-based digital elevation model (DEM) format. Another format, the triangular irregular network (TIN), is a topological vector-based approach that models surfaces by connecting sample points as a network of triangles, with a point density that may vary with the roughness of the terrain. Rectangular grids (or elevation matrices) are two-dimensional array structures.
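The vector/raster distinction can be made concrete with a small sketch. Here a road is held as vector geometry and elevation as a raster grid, as in a simple DEM; all values are invented:

```python
# Vector format: geometry as coordinate lists (a polyline's vertices).
vector_road = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.5)]

# Raster format: a regular grid of attribute values, here elevation
# samples in metres, as in a toy digital elevation model (DEM).
raster_dem = [
    [120, 125, 131],
    [118, 122, 129],
    [115, 119, 126],
]

def elevation_at(dem, row, col):
    """Raster lookup: each cell directly stores the attribute value."""
    return dem[row][col]

def road_length(points):
    """Vector measure: explicit geometry supports exact computation."""
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

print(elevation_at(raster_dem, 1, 2))    # 129
print(round(road_length(vector_road), 3))
```

The trade-off shown here is the general one: rasters make per-location attribute queries trivial, while vectors make geometric measurements and topology natural.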


5-GENOME DATA

The biological sciences encompass an enormous variety of information. Environmental science gives us a view of how species live and interact in a world filled with natural phenomena. Biology and ecology study particular species. Anatomy focuses on the overall structure of an organism, documenting the physical aspects of individual bodies. Traditional medicine and physiology break the organism into systems and tissues and strive to collect information on the workings of these systems and the organism as a whole. Histology and cell biology delve into the tissue and cellular levels and provide knowledge about the inner structure and function of the cell. This wealth of information that has been generated, classified, and stored for centuries has only recently become a major application of database technology. Genetics has emerged as an ideal field for the application of information technology. In a broad sense, it can be thought of as the construction of models based on information about genes (which can be defined as units of heredity) and populations, and the seeking out of relationships in that information. The study of genetics can be divided into three branches:
 Mendelian genetics. This is the study of the transmission of traits between generations.
 Molecular genetics. This is the study of the chemical structure and function of genes at the molecular level.
 Population genetics. This is the study of how genetic information varies across populations of organisms.

Biological data exhibits many special characteristics that make management of biological information a particularly challenging problem. These characteristics have led to the emergence of a multidisciplinary field called bioinformatics. Bioinformatics addresses information management of genetic information with special emphasis on DNA sequence analysis. Applications of bioinformatics span the design of drug targets, the study of mutations and related diseases, anthropological investigations of migration patterns of tribes, and therapeutic treatments. The term genome is defined as the total genetic information that can be obtained about an entity. The human genome, for example, generally refers to the complete set of genes required to create a human being, estimated to be more than 30,000 genes spread over 23 pairs of chromosomes, with an estimated 3 to 4 billion nucleotides. The goal of the Human Genome Project (HGP) has been to obtain the complete sequence (the ordering of the bases) of those nucleotides.
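Basic DNA sequence analysis of the kind bioinformatics systems support can be sketched over a plain nucleotide string. The sequence below is a toy example, not real genome data:

```python
# Toy nucleotide sequence (bases A, T, G, C).
sequence = "ATGCGCGATATGCCGTAGATGAAA"

def gc_content(seq):
    """Fraction of G and C bases, a basic sequence statistic."""
    return sum(1 for base in seq if base in "GC") / len(seq)

def find_motif(seq, motif):
    """All 0-based start positions where an exact motif occurs."""
    return [i for i in range(len(seq) - len(motif) + 1)
            if seq[i:i + len(motif)] == motif]

print(round(gc_content(sequence), 3))
print(find_motif(sequence, "ATG"))  # occurrences of the ATG start codon
```

Real sequence databases scale these ideas to billions of bases with specialized indexes and approximate-matching algorithms, which is precisely the functionality conventional DBMSs lack.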



6-DIGITAL LIBRARY

Digital libraries are an important and active research area. Conceptually, a digital library is an analog of a traditional library (a large collection of information sources in various media) coupled with the advantages of digital technologies. However, digital libraries differ from their traditional counterparts in significant ways: storage is digital, remote access is quick and easy, and materials are copied from a master version. Furthermore, keeping extra copies on hand is easy and is not hampered by budget and storage restrictions, which are major problems in traditional libraries. Thus, digital technologies overcome many of the physical and economic limitations of traditional libraries. The Digital Library Initiative (DLI), jointly funded by NSF, DARPA, and NASA, has been a major accelerator of the development of digital libraries. This initiative provided significant funding to six major projects at six universities in its first phase, covering a broad spectrum of enabling technologies. The initiative's web page defines its focus as to "dramatically advance the means to collect, store, and organize information in digital forms, and make it available for searching, retrieval, and processing via communication networks, all in user-friendly ways". The magnitude of these data collections, as well as their diversity and multiple formats, provides challenges on a new scale. The future progression of the development of digital libraries is likely to move from the present technology of retrieval via the internet, through net searches of indexed information in repositories, to a time of information correlation and analysis by intelligent networks. Techniques for collecting information, storing it, and organizing it to support informational requirements, learned in decades of design and implementation of databases, will provide the baseline for development of approaches appropriate for digital libraries.

7- BIG DATA

Nowadays, advances in technology generate large, diverse, longitudinal, complex, and distributed data sets from instruments, sensors, Internet transactions, email, video, click streams, and other digital sources. Companies capture trillions of bytes of information about their customers, suppliers, and operations, and millions of networked sensors embedded in the physical world, in devices such as mobile phones and automobiles, are sensing, creating, and communicating data. Individuals with smartphones, social network sites, and multimedia will continue to fuel this exponential growth. Big data, the large pools of data that can be captured, communicated, aggregated, stored, and analysed, is now part of every sector and function of the global economy. Like other essential factors of production, such as hard assets and human capital, it is increasingly the case that much of modern economic activity, innovation, and growth simply couldn't take place without data. Big data represents a sea change in the technology we draw upon for making decisions. Organizations will integrate and analyse data from diverse sources, complementing enterprise databases with data from social media, video, smart mobile devices, and other sources. The evolution of information architectures to include big data will likely provide the foundation for a new generation of enterprise infrastructure. To exploit these diverse sources of data for decision-making, an organization must develop an effective strategy for acquiring, organizing, and analysing big data, using it to generate new insights about the business and make better decisions. The previously nebulous definition of "big data" is growing more concrete as it becomes the focus of more applications.

Figure 2. The three characteristics of big data: volume, velocity and variety

As seen in Figure 2 (above), volume, velocity and variety make up three key characteristics of big data:
• Volume. Rather than just capturing business transactions and moving samples and aggregates to another database for analysis, applications now capture all possible data for analysis.
• Velocity. Traditional transaction-processing applications might have captured transactions in real time from end users, but newer applications are increasingly capturing data streaming in from other systems or even sensors. Traditional applications also move their data to an enterprise data warehouse through a deliberate and careful process that generally focuses on historical analysis.
• Variety. The variety of data is much richer now, because data no longer comes solely from business transactions. It often comes from machines, sensors and unrefined sources, making it much more complex to manage.
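The "velocity" and "variety" points above can be sketched in a few lines of Python: heterogeneous, schema-less records (a business transaction, sensor readings, a social post) arrive as a stream and are aggregated on the fly, rather than being forced into one fixed relational schema and batch-loaded later. All records here are invented for illustration:

```python
# Sketch of velocity and variety: schema-less JSON records from
# different sources are processed one by one as they "arrive",
# with a running aggregate maintained instead of a batch load.
import json

stream = [
    '{"type": "transaction", "amount": 19.99, "customer": "c42"}',
    '{"type": "sensor", "device": "thermo-7", "reading": 21.4}',
    '{"type": "social", "user": "u9", "text": "great service!"}',
    '{"type": "sensor", "device": "thermo-7", "reading": 21.9}',
]

counts = {}                      # running tally per record type
for raw in stream:               # process each event as it arrives
    record = json.loads(raw)     # records need not share a schema
    counts[record["type"]] = counts.get(record["type"], 0) + 1

print(counts)  # → {'transaction': 1, 'sensor': 2, 'social': 1}
```

Note that the three record shapes share only a `type` field; a relational table would need either many nullable columns or a table per shape, which is precisely the management complexity the variety point describes.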

8-NOSQL DATABASES

The term NoSQL has been around for only a few years; it was coined as a descriptor for a variety of database technologies that emerged to cater for what are known as "Web-scale" or "Internet-scale" demands. In computing, NoSQL (commonly interpreted as "not only SQL") is a broad class of database management systems identified by non-adherence to the widely used relational database management system model. NoSQL databases are not built primarily on tables and generally do not use SQL for data manipulation. NoSQL database systems are often highly optimized for retrieval and append operations and often offer little functionality beyond record storage (e.g. key-value stores). The reduced run-time flexibility compared to full SQL systems is compensated by marked gains in scalability and performance for certain data models. In short, NoSQL database management systems are useful when working with huge quantities of data whose nature does not require a relational model. The data can be structured, but NoSQL is used when what really matters is the ability to store and retrieve great quantities of data, not the relationships between the elements. Usage examples include storing millions of key-value pairs in one or a few associative arrays, or storing millions of data records. The fledgling NoSQL marketplace is going through a rapid transition, from predominantly community-driven platform development to a more mature application-driven market. Scaling up web infrastructure on a NoSQL basis has proven successful for Facebook, Digg, and Twitter, and successful attempts have been made to develop NoSQL applications in biotechnology, defence, and image/signal processing. Interest in key-value pair (KVP) technology has re-emerged to the point where traditional RDBMS vendors are evaluating strategies for developing in-house NoSQL solutions and integrating them into their current product offerings.
It will not take long before we see acquisitions driven by emerging NoSQL technology. Future deals will likely be made to compete better both in platform offerings and in vertical market segments.
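The key-value store idea described above can be illustrated with a minimal in-memory sketch: no tables, no SQL, no relationships between elements, just fast put/get/append on an associative array. The class name, keys, and values are invented for illustration; real KVP systems (e.g. on-disk or distributed stores) add persistence and partitioning on top of essentially this interface:

```python
# Minimal in-memory sketch of a key-value store: record storage only,
# optimized access by key, no relational features.
class KVStore:
    def __init__(self):
        self._data = {}            # the underlying associative array

    def put(self, key, value):
        """Store (or overwrite) a value under a key."""
        self._data[key] = value

    def get(self, key, default=None):
        """Retrieve a value by key; relationships between keys are not modelled."""
        return self._data.get(key, default)

    def append(self, key, item):
        """Support append-heavy workloads by accumulating items under one key."""
        self._data.setdefault(key, []).append(item)

store = KVStore()
store.put("user:1001", {"name": "Ada", "city": "London"})
store.append("events:1001", "login")
store.append("events:1001", "purchase")
print(store.get("user:1001"))
print(store.get("events:1001"))   # → ['login', 'purchase']
```

The colon-delimited keys ("user:1001", "events:1001") mimic a common NoSQL naming convention: any grouping of records is expressed in the key itself, since the store offers no joins.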







CONCLUSIONS

Applications in domains such as multimedia, Geographical Information Systems, digital libraries, and big data demand a completely different set of requirements in terms of the underlying database models, requirements that the conventional relational model can no longer satisfy. Furthermore, the volume of data is typically significantly larger than in classical database systems. Finally, indexing, retrieving, and analyzing these data types require specialized functionality that is not available in conventional database systems. Hence, new directions in DBMS technology, such as those described above, are necessary.