Parallel computing is a form of computation in which many calculations are
carried out simultaneously, operating on the principle that large
problems can often be divided into smaller ones, which are then
solved concurrently ("in parallel"). There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism
has been employed for many years, mainly in high-performance computing,
but interest in it has grown lately due to the physical constraints
preventing frequency scaling. As power consumption (and consequently heat generation) by computers
has become a concern in recent years, parallel computing has become the
dominant paradigm in computer architecture, mainly in the form of
multi-core processors.
A. Types of parallelism
Bit-level parallelism
From
the advent of very-large-scale integration (VLSI) computer-chip
fabrication technology in the 1970s until about 1986, speed-up in
computer architecture was driven by doubling the computer word size: the
amount of information the processor can manipulate per cycle. Increasing
the word size reduces the number of instructions the processor must
execute to perform an operation on variables whose sizes are greater
than the length of the word. For
example, where an 8-bit processor must add two 16-bit integers, the
processor must first add the 8 lower-order bits from each integer using
the standard addition instruction, then add the 8 higher-order bits
using an add-with-carry instruction and the carry bit from the lower-order addition. Thus, an 8-bit processor requires two instructions to complete a
single operation, where a 16-bit processor would be able to complete the
operation with a single instruction.
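The two-instruction sequence described above can be sketched in Python; the helper names (`add8`, `add16_on_8bit`) are illustrative, not part of any real instruction set:

```python
# Sketch of 16-bit addition on an 8-bit machine: an ADD on the low
# bytes, then an ADC (add-with-carry) on the high bytes.

def add8(a, b, carry_in=0):
    """8-bit add: returns (8-bit result, carry-out)."""
    total = a + b + carry_in
    return total & 0xFF, total >> 8

def add16_on_8bit(x, y):
    """Add two 16-bit values using only 8-bit operations."""
    lo, carry = add8(x & 0xFF, y & 0xFF)   # ADD: lower-order bytes
    hi, _ = add8(x >> 8, y >> 8, carry)    # ADC: higher-order bytes plus carry
    return (hi << 8) | lo

assert add16_on_8bit(0x12FF, 0x0001) == 0x1300  # carry propagates from the low byte
```

A 16-bit processor performs the same addition in one instruction, which is exactly the speed-up that word-size doubling bought.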
Instruction-level parallelism
A canonical five-stage pipeline in a RISC machine (IF = Instruction
Fetch, ID = Instruction Decode, EX = Execute, MEM = Memory access, WB =
Register write back)
A computer program is, in essence, a stream of instructions executed by a processor. These
instructions can be re-ordered and combined into groups which are
then executed in parallel without changing the result of the program. This is known as instruction-level parallelism. Advances in instruction-level parallelism dominated computer architecture from the mid-1980s until the mid-1990s.
Modern processors have multi-stage instruction pipelines. Each stage in the pipeline corresponds to a different action the processor performs on that instruction in that stage; a processor with an N-stage pipeline can have up to N different instructions at different stages of completion. The
canonical example of a pipelined processor is a RISC processor, with
five stages: instruction fetch, decode, execute, memory access, and
write back. The Pentium 4 processor had a 35-stage pipeline.
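The cycle-count benefit of the pipeline described above can be sketched with two small formulas (a simplified model that ignores stalls and hazards):

```python
# With an s-stage pipeline, k independent instructions finish in
# s + k - 1 cycles instead of s * k: after the pipeline fills, one
# instruction completes every cycle. Stage names follow the classic
# five-stage RISC pipeline.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def cycles_unpipelined(k, s=len(STAGES)):
    return k * s          # each instruction runs all stages alone

def cycles_pipelined(k, s=len(STAGES)):
    return s + k - 1      # one completion per cycle once the pipeline is full

assert cycles_unpipelined(10) == 50
assert cycles_pipelined(10) == 14
```

Real processors add hazard detection and forwarding on top of this idealized model, but the asymptotic win (throughput approaching one instruction per cycle) is the same.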
Task parallelism
Task parallelism is the characteristic of a parallel program that "entirely
different calculations can be performed on either the same or different
sets of data". This contrasts with data parallelism, where the same calculation is performed on the same or different sets of data. Task parallelism does not usually scale with the size of a problem.
B. Distributed computing
A
distributed computer (also known as a distributed-memory
multiprocessor) is a distributed-memory computer system in which the
processing elements are connected by a network. Distributed computers are highly scalable.
C. Parallel computer architecture
A logical view of a Non-Uniform Memory Access (NUMA) architecture: processors in one directory can access that directory's memory with
less latency than they can access memory in the other directory.
D. Parallel programming languages
Concurrent
programming languages, libraries, APIs, and parallel programming models
(such as algorithmic skeletons) have been created for programming
parallel computers. Generally,
these can be divided into classes based on the assumptions they make
about the underlying memory architecture: shared memory, distributed
memory, or shared distributed memory. Shared-memory programming languages communicate by manipulating shared-memory variables. Distributed memory uses message passing. POSIX
Threads and OpenMP are two of the most widely used shared-memory APIs,
whereas Message Passing Interface (MPI) is the most widely used
message-passing system API. One concept used in programming parallel programs is the future
concept, where one part of a program promises to deliver a required
datum to another part of a program at some future time.
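The future concept described above is available directly in Python's standard library; a small sketch:

```python
# A "future" is a promise of a datum: one part of the program starts
# the computation, another part collects the result later.
from concurrent.futures import ThreadPoolExecutor
import time

def slow_square(x):
    time.sleep(0.01)    # stand-in for expensive work
    return x * x

with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_square, 7)   # the promise of a datum
    # ... other work can proceed here while slow_square runs ...
    value = future.result()                # block until the datum arrives

assert value == 49
```

The same pattern appears under names like `promise`, `async`/`await`, or `std::future` in other languages.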
CAPS
entreprise and Pathscale are also coordinating their efforts to make
the HMPP (Hybrid Multicore Parallel Programming) directives an open
standard called OpenHMPP. The
OpenHMPP directive-based programming model offers a syntax to
efficiently offload computations onto hardware accelerators and to
optimize data movement to and from the hardware memory. OpenHMPP directives describe remote procedure calls (RPCs) on an accelerator device (e.g., a GPU) or, more generally, a set of cores. The directives annotate C or Fortran code to describe two sets of
functionalities: the offloading of procedures (denoted codelets) onto a
remote device, and the optimization of the data transfers between the CPU
main memory and the accelerator memory.
E. Introductory CUDA GPU programming
CUDA
(Compute Unified Device Architecture) is a parallel computing platform
and programming model created by NVIDIA and implemented by the
graphics processing units (GPUs) that they produce. CUDA gives developers direct access to the virtual instruction
set and memory of the parallel computational elements in CUDA GPUs.
Using CUDA, GPUs can be used for general-purpose processing (i.e., not exclusively graphics); this approach is known as GPGPU. Unlike CPUs, however, GPUs have a parallel throughput architecture that
emphasizes executing many concurrent threads slowly, rather than
executing a single thread very quickly.
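A conceptual sketch only (real CUDA kernels are written in C/C++ and launched on the device): the core CUDA idea is a "kernel", one function applied at every index of a grid, with one lightweight thread per element. Here a Python thread pool stands in for the GPU, and the kernel computes SAXPY (`out = a*x + y`), a standard introductory example:

```python
# Imitating the CUDA execution model in plain Python: one logical
# "thread" per array element, all running the same kernel function.
from concurrent.futures import ThreadPoolExecutor

def saxpy_kernel(i, a, x, y, out):
    out[i] = a * x[i] + y[i]      # each "thread" handles one element

def launch(kernel, n, *args):
    """Imitate a 1-D kernel launch over n 'threads'."""
    with ThreadPoolExecutor() as pool:
        list(pool.map(lambda i: kernel(i, *args), range(n)))

x = [1.0, 2.0, 3.0]
y = [10.0, 10.0, 10.0]
out = [0.0] * 3
launch(saxpy_kernel, 3, 2.0, x, y, out)

assert out == [12.0, 14.0, 16.0]
```

On a real GPU the same structure runs tens of thousands of such threads at once, which is the throughput-oriented design the paragraph above contrasts with CPU latency-oriented design.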
General-purpose
computing on graphics processing units (GPGPU, rarely GPGP or GP²U)
is the utilization of a graphics processing unit (GPU), which
typically handles computation only for computer graphics, to perform
computation in applications traditionally handled by the central
processing unit (CPU).
Any GPU providing a functionally complete set of operations performed on arbitrary bits can compute any computable value. Additionally, the use of multiple graphics cards in one computer, or
large numbers of graphics chips, further parallelizes the already
parallel nature of graphics processing.
OpenCL is the currently dominant open general-purpose GPU computing language; the dominant proprietary framework is Nvidia's CUDA.
References: http://en.wikipedia.org/wiki/
Monday, 23 June 2014
Tuesday, 13 May 2014
Quantum Computation
First proposed in the 1970s, quantum computing relies on quantum
physics, taking advantage of certain quantum-mechanical properties of
atoms or nuclei that allow them to work together as quantum bits, or qubits, serving as the computer’s processor and memory.
By interacting with each other while being isolated from the external
environment, qubits can perform certain calculations exponentially
faster than conventional computers.
Qubits do not rely on the traditional binary nature of computing. While traditional computers encode information into bits using binary numbers, either a 0 or a 1, and can only do calculations on one set of numbers at once, quantum computers encode information as a series of quantum-mechanical states, such as spin directions of electrons or polarization orientations of a photon. These states might represent a 1 or a 0, might represent a combination of the two, or might represent a number expressing that the state of the qubit is somewhere between 1 and 0, or a superposition of many different numbers at once. A quantum computer can do an arbitrary reversible classical computation on all the numbers simultaneously, which a binary system cannot do, and also has some ability to produce interference between various different numbers. By doing a computation on many different numbers at once, then interfering the results to get a single answer, a quantum computer has the potential to be much more powerful than a classical computer of the same size. Using only a single processing unit, a quantum computer can naturally perform myriad operations in parallel. Quantum computing is not well suited for tasks such as word processing and email, but it is ideal for tasks such as cryptography and modeling and indexing very large databases.
Entanglement:
Quantum entanglement is a physical phenomenon that occurs when pairs or groups of particles are generated or interact in ways such that the quantum state of each particle cannot be described independently – instead, a quantum state may be given for the system as a whole.
Measurements of physical properties such as position, momentum, spin, polarization, etc. performed on entangled particles are found to be appropriately correlated. For example, if a pair of particles is generated in such a way that their total spin is known to be zero, and one particle is found to have clockwise spin on a certain axis, then the spin of the other particle, measured on the same axis, will be found to be counterclockwise. Because of the nature of quantum measurement, however, this behavior gives rise to effects that can appear paradoxical: any measurement of a property of a particle can be seen as acting on that particle (e.g. by collapsing a number of superimposed states); and in the case of entangled particles, such action must be on the entangled system as a whole. It thus appears that one particle of an entangled pair “knows” what measurement has been performed on the other, and with what outcome, even though there is no known means for such information to be communicated between the particles, which at the time of measurement may be separated by arbitrarily large distances.
Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky and Nathan Rosen, describing what came to be known as the EPR paradox, and several papers by Erwin Schrödinger shortly thereafter. Einstein and others considered such behavior to be impossible, as it violated the local realist view of causality (Einstein referred to it as “spooky action at a distance”), and argued that the accepted formulation of quantum mechanics must therefore be incomplete. Later, however, the counterintuitive predictions of quantum mechanics were verified experimentally. Experiments have been performed involving measuring the polarization or spin of entangled particles in different directions, which – by producing violations of Bell’s inequality – demonstrate statistically that the local realist view cannot be correct. This has been shown to occur even when the measurements are performed more quickly than light could travel between the sites of measurement: there is no lightspeed or slower influence that can pass between the entangled particles. Recent experiments have measured entangled particles within less than one part in 10,000 of the light travel time between them. According to the formalism of quantum theory, the effect of measurement happens instantly. It is not possible, however, to use this effect to transmit classical information at faster-than-light speeds (see Faster-than-light → Quantum mechanics).
Quantum entanglement is an area of extremely active research by the physics community, and its effects have been demonstrated experimentally with photons, electrons, molecules the size of buckyballs, and even small diamonds. Research is also focused on the utilization of entanglement effects in communication and computation.
Qubit data operations
Quantum information science begins with generalizing the fundamental resource of classical bits of information into quantum bits, or qubits. Just as bits are ideal objects abstracted from the principles of classical physics, qubits are ideal quantum objects abstracted from the principles of quantum mechanics. A bit can be represented by a magnetic region on a disk, a voltage on a circuit, or a graphite pencil mark made on paper. The functioning of these classical physical states as bits does not depend on the details of how they are realized. Similarly, the attributes of a qubit are independent of its specific physical representation, such as the spin of an atomic nucleus or the polarization of a photon of light.
A bit is described by its state, 0 or 1. Similarly, a qubit is described by its quantum state. Two possible quantum states for a qubit correspond to the classical bits 0 and 1. In quantum mechanics, however, any object that has two distinct states necessarily has a range of other possible states, called superpositions, which blend the two to varying degrees. The allowed qubit states are exactly all the states that can be reached, in principle, when classical bits are transplanted into the quantum world. Qubit states correspond to points on the surface of a sphere, with 0 and 1 as the south and north poles. The continuum of states between 0 and 1 fosters many of the remarkable attributes of quantum information.
Quantum gate:
In quantum computing, and specifically the quantum circuit model of computation, a quantum gate (or quantum logic gate) is a basic quantum circuit operating on a small number of qubits. Quantum gates are the building blocks of quantum circuits, like classical logic gates are for conventional digital circuits. Unlike many classical logic gates, quantum logic gates are reversible. However, classical computing can be performed using only reversible gates. For example, the reversible Toffoli gate can implement all Boolean functions. This gate has a direct quantum equivalent, showing that quantum circuits can perform all operations performed by classical circuits.
Quantum logic gates are represented by unitary matrices. The most common quantum gates operate on spaces of one or two qubits, just like the common classical logic gates operate on one or two bits. This means that as matrices, quantum gates can be described by 2 × 2 or 4 × 4 unitary matrices.
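The claims above can be checked numerically with NumPy: common one-qubit gates are 2 × 2 unitary matrices, and the three-qubit Toffoli gate (an 8 × 8 unitary) acts as a controlled-controlled-NOT, computing the AND of the two control bits into the target:

```python
# Quantum gates as unitary matrices, and the Toffoli gate as a
# reversible AND. The helper apply_to_basis is illustrative only.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate (2x2)
X = np.array([[0, 1], [1, 0]])                 # NOT (Pauli-X) gate

# Unitarity: U-dagger times U equals the identity.
assert np.allclose(H.conj().T @ H, np.eye(2))
assert np.allclose(X.conj().T @ X, np.eye(2))

# Toffoli on 3 qubits: flips the target iff both controls are 1,
# i.e. it swaps the basis states |110> and |111>.
TOFFOLI = np.eye(8)
TOFFOLI[[6, 7], :] = TOFFOLI[[7, 6], :]

def apply_to_basis(gate, bits):
    """Apply a gate to a computational basis state; return the new bits."""
    index = int("".join(map(str, bits)), 2)
    state = np.zeros(len(gate))
    state[index] = 1
    new_index = int(np.argmax(gate @ state))
    return [int(b) for b in format(new_index, f"0{len(bits)}b")]

# The target (last bit) becomes the AND of the two controls:
assert apply_to_basis(TOFFOLI, [1, 1, 0]) == [1, 1, 1]
assert apply_to_basis(TOFFOLI, [1, 0, 0]) == [1, 0, 0]
```

Because the Toffoli matrix is a permutation (hence unitary and its own inverse), the computation is reversible, which is exactly the property the text highlights.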
Shor’s algorithm:
Shor’s algorithm, named after mathematician Peter Shor, is a quantum algorithm (an algorithm that runs on a quantum computer) for integer factorization formulated in 1994. Informally it solves the following problem: Given an integer N, find its prime factors.
On a quantum computer, to factor an integer N, Shor’s algorithm runs in polynomial time (the time taken is polynomial in log N, which is the size of the input). Specifically, it takes time O((log N)³), demonstrating that the integer factorization problem can be efficiently solved on a quantum computer and is thus in the complexity class BQP. This is substantially faster than the most efficient known classical factoring algorithm, the general number field sieve, which works in sub-exponential time, about O(e^(1.9 (log N)^(1/3) (log log N)^(2/3))). The efficiency of Shor’s algorithm is due to the efficiency of the quantum Fourier transform, and modular exponentiation by repeated squarings.
If a quantum computer with a sufficient number of qubits could operate without succumbing to noise and other quantum decoherence phenomena, Shor’s algorithm could be used to break public-key cryptography schemes such as the widely used RSA scheme. RSA is based on the assumption that factoring large numbers is computationally infeasible. So far as is known, this assumption is valid for classical (non-quantum) computers; no classical algorithm is known that can factor in polynomial time. However, Shor’s algorithm shows that factoring is efficient on an ideal quantum computer, so it may be feasible to defeat RSA by constructing a large quantum computer. It was also a powerful motivator for the design and construction of quantum computers and for the study of new quantum computer algorithms. It has also facilitated research on new cryptosystems that are secure from quantum computers, collectively called post-quantum cryptography.
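The number theory Shor's algorithm relies on can be sketched classically; the quantum speed-up lies entirely in finding the *period* r of a^k mod N, which is done here by brute force instead (function names are illustrative):

```python
# Classical skeleton of Shor's algorithm: find the period r of
# a^k mod N (the step the quantum Fourier transform accelerates),
# then recover the factors of N with gcd.
from math import gcd

def find_period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N), found by brute force."""
    k, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        k += 1
    return k

def shor_classical_part(N, a):
    r = find_period(a, N)            # the quantum step, done classically here
    assert r % 2 == 0                # the method needs an even period
    p = gcd(pow(a, r // 2) - 1, N)
    q = gcd(pow(a, r // 2) + 1, N)
    return sorted({p, q} - {1, N}) or None

# Factoring 15 with base a = 7: the period of 7^k mod 15 is 4, and
# gcd(7^2 - 1, 15) and gcd(7^2 + 1, 15) yield the factors 3 and 5.
assert shor_classical_part(15, 7) == [3, 5]
```

The brute-force period search takes exponential time in the number of digits of N; replacing it with quantum order-finding is what makes the whole algorithm polynomial.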
In 2001, Shor’s algorithm was demonstrated by a group at IBM, who factored 15 into 3 × 5, using an NMR implementation of a quantum computer with 7 qubits. However, some doubts have been raised as to whether IBM’s experiment was a true demonstration of quantum computation, since no entanglement was observed.[4] Since IBM’s implementation, several other groups have implemented Shor’s algorithm using photonic qubits, emphasizing that entanglement was observed. In 2012, the factorization of 15 was repeated. Also in 2012, the factorization of 21 was achieved, setting the record for the largest number factored with a quantum computer. In April 2012, the factorization of 143 was achieved, although this used adiabatic quantum computation rather than Shor’s algorithm.
Source: en.wikipedia.org
Monday, 28 April 2014
Cloud Computing
1. Explain the general concept of cloud computing.
Answer: Cloud computing in general can be portrayed as a synonym for distributed computing over a network, with
the ability to run a program or application on many connected computers at the
same time. It specifically refers to a computing hardware machine, or a group of
computing hardware machines, commonly referred to as a server,
connected through a communication network such as the Internet,
an intranet, a local area network (LAN), or a wide area
network (WAN). Individual users who have permission to
access the server can use the server's processing power for their individual
computing needs, such as running an application, storing data, or any other computing
need.
2. Benefits of cloud computing
Answer:
*Scalability: cloud computing lets us add to our data storage capacity without having to purchase additional equipment, such as hard drives. We simply add the capacity provided by the cloud computing service provider.
*Accessibility: we can access our data whenever and wherever we are, as long as we are connected to the Internet, making it easier to access the data when it matters.
*Security: we can be assured of data security by the cloud computing service provider, so for an IT-based company, data can be stored securely with the provider. This also reduces the cost required to secure corporate data.
*Creation: users can develop their creations or projects without having to submit them directly to the company; instead, they can send them through the cloud computing service provider.
*Disaster resilience: when a natural disaster strikes, our proprietary data remains stored safely in the cloud even if our hard drive or gadget is damaged.
3. How cloud computing works
Answer: With cloud computing, the local computer no longer has to run the heavy
computational work required to run applications, and there is no need to install a
software package on every computer; beyond the operating system, we only install a
single interface application. The network of computers that makes up the cloud (the
Internet) handles the rest. Servers run all the applications,
ranging from e-mail and word processing to complex data-analysis programs.
When a user accesses the cloud (Internet) for a popular website, many things
can happen. The user's Internet Protocol (IP) address, for example, can be used to
determine where the user is located (geolocation). Domain Name System (DNS)
services can then redirect the user to a server cluster that is close to the
user, so that the site can be accessed quickly and in their local language.
The user does not log into the server; rather, they log into their service using
a session ID or cookie that has been obtained and stored in their browser. What
users see in the browser usually comes from a web server.
4. Characteristics
Answer: Cloud computing exhibits the
following key characteristics:
- Agility improves with users' ability to re-provision technological infrastructure resources.
- Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way that a traditional user interface (e.g., a computer desktop) facilitates interaction between humans and computers. Cloud computing systems typically use Representational State Transfer (REST)-based APIs.
- Cost: cloud providers claim that computing costs are reduced. A public-cloud delivery model converts capital expenditure to operational expenditure. This purportedly lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based options, and fewer IT skills are required for in-house implementation. The e-FISCAL project's state-of-the-art repository contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.
- Device and location independence enable users to access systems using a web browser regardless of their location or what device they use (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
- Virtualization technology allows sharing of servers and storage devices and increased utilization. Applications can be easily migrated from one physical server to another.
- Multitenancy enables sharing of resources and costs across a large pool of users thus allowing for:
- centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
- peak-load capacity increases (users need not engineer for highest possible load-levels)
- utilisation and efficiency improvements for systems that are often only 10–20% utilised.
- Reliability improves with the use of multiple redundant sites, which makes well-designed cloud computing suitable for business continuity and disaster recovery.
- Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real time [45] (note that VM startup time varies by VM type, location, OS, and cloud provider), without users having to engineer for peak loads.
- Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.
- Security can improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford to tackle. However, the complexity of security is greatly increased when data is distributed over a wider area or over a greater number of devices, as well as in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
- Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.
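The elasticity point above (dynamic, on-demand provisioning without engineering for peak load) can be sketched as a simple threshold rule. This is an illustrative sketch only; the thresholds and the one-instance-at-a-time policy are assumptions, not any provider's actual autoscaling algorithm.

```python
def scale_decision(current_instances, cpu_utilisation,
                   scale_up_at=0.80, scale_down_at=0.30, minimum=1):
    """Suggest a new instance count from a simple threshold rule."""
    if cpu_utilisation > scale_up_at:
        return current_instances + 1                 # add capacity under load
    if cpu_utilisation < scale_down_at:
        return max(minimum, current_instances - 1)   # shed idle capacity
    return current_instances                         # stay in the comfort band

print(scale_decision(4, 0.91))  # 5: load is high, add an instance
print(scale_decision(4, 0.12))  # 3: load is low, remove one
print(scale_decision(1, 0.05))  # 1: never drop below the minimum
```

Real autoscalers add cooldown periods and averaging windows so that short load spikes do not cause constant churn, but the decision rule itself is of this shape.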
5.
How secure is it?
Answer:
*Data Protection
When we decide to adopt the cloud or migrate data to it, a key consideration is how the cloud service provider protects our data: by what methods they assure us that the data is safe. The physical location of the data is also an important consideration, because it concerns the data center. Make sure the provider's data centers are certified and audited, for example sited in an earthquake-free location, with a three-tier redundant power supply, and so on.
*Security Control
Once the data itself is well protected, the next question is the security of access to it (roles): what procedures ensure that only those who are entitled can access our data. This includes access by the provider's own workers and employees.
*Compliance
Standards should be applied by the cloud computing provider, for example ISO 27001 for data security, ITIL for service delivery, COBIT, and the Cloud Security Alliance guidelines, as well as international and government regulations. If a breach occurs, settlement is then easier.
*Multi-tenancy
Resource sharing is inherent to cloud computing, so consider what happens to our data when another tenant commits fraud or causes a leak: physically, our data can sit on the same physical medium as someone else's.
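The isolation concern above can be made concrete with a toy example: in a multi-tenant store, every query must be scoped to the requesting tenant, or one tenant can read another's rows. The records and tenant names below are invented purely for illustration.

```python
# Toy multi-tenant document store: rows and tenant names are made up.
RECORDS = [
    {"tenant": "acme",   "doc": "q1-report"},
    {"tenant": "acme",   "doc": "payroll"},
    {"tenant": "globex", "doc": "contracts"},
]

def documents_for(tenant):
    """Return only the rows belonging to the requesting tenant."""
    return [r["doc"] for r in RECORDS if r["tenant"] == tenant]

print(documents_for("acme"))    # ['q1-report', 'payroll']
print(documents_for("globex"))  # ['contracts']
```

A provider enforcing this filter in every code path (and at the storage layer, not only in application code) is what keeps tenants on shared hardware logically separated.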
*Security Governance
This concerns the governance policies of the service provider and of us as users of the service; what the governance framework covers should be spelled out and defined here.
6.
The concept of how it works
Answer: In a cloud computing system there is a significant workload shift. Local computers no longer have to do all the heavy lifting; the network of computers that makes up the cloud handles it instead. Hardware and software demands on the user's side decrease: the only thing the user's computer must be able to run is the cloud system's interface software, which can be as simple as a web browser, and the cloud network takes care of the rest. There is a good chance you are already using some form of cloud computing. If you have an account with a web-based e-mail service such as Hotmail, Yahoo! Mail, or Gmail, then you already have some experience with cloud computing. Instead of running an e-mail program on your computer, you log in to the account remotely; the software and storage for your account do not live on your computer but on the service's cloud.
Source: Wikipedia
Name:
Priyanti Kusuma Sari
NPM:
55410402
Class:
4ia10
Tuesday, 25 March 2014
Agent Part 1
A software agent is a software entity dedicated to a specific purpose that allows a user to delegate tasks to it autonomously; from here on, a software agent is simply called an agent. An agent may have its own ideas about how to complete a particular job, or its own agenda. An agent that does not move to another host is called a stationary agent.
A more detailed definition of an agent, from a systems point of view, is a software object that:
1. Is placed in an execution environment
2. Has the following properties:
1. Autonomy: an agent can carry out its tasks independently, without being directly controlled by the user, by other agents, or by the environment. To achieve its goals independently, an agent must have control over every action it takes, both outward and inward [Wooldridge et al., 1995]. One important point is that autonomy is closely tied to the agent's intelligence.
2. Intelligence, Reasoning, and Learning: every agent must meet a minimum standard to be called an agent, namely intelligence. The concept of intelligence has three required components: an internal knowledge base, the ability to reason over that knowledge base, and the ability to learn so as to adapt to changes in the environment.
3. Mobility and Stationarity: a mobile agent in particular must possess its defining characteristic, mobility, which distinguishes it from a stationary agent. Both, however, must be able to send messages and communicate with other agents.
4. Delegation: as the name suggests, and as discussed in the definition above, an agent operates within the framework of tasks delegated by the user. This phenomenon of delegation is the main characteristic that makes a program an agent.
5. Reactivity: another characteristic of an agent is the ability to adapt quickly to changing information in its environment, which may include other agents, users, outside information, and so on [Brenner et al., 1998].
6. Proactivity and Goal Orientation: proactivity can be seen as a continuation of reactivity. An agent is expected not only to adapt to changes in the environment but also to take the initiative in deciding which steps to take [Brenner et al., 1998]. To that end, an agent must be designed with clear goals and must always stay oriented toward the goals it carries (goal-oriented).
7. Communication and Coordination Capability: an agent must be able to communicate with the user and with other agents. Communication with the user is a matter of user interfaces and their devices, while communication, coordination, and collaboration between agents is the central research problem of Multi-Agent Systems (MAS). In any case, coordinating with other agents to carry out tasks requires a standard communication language. Tim Finin [Finin et al., 1993] [Finin et al., 1994] [Finin et al., 1995] [Finin et al., 1997] and Yannis Labrou [Labrou et al., 1994] [Labrou et al., 1997] are software-agent researchers deeply involved in research on inter-agent communication languages and protocols. One of their products is the Knowledge Query and Manipulation Language (KQML); also related to inter-agent communication is the Knowledge Interchange Format (KIF).
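KQML messages are performatives written in an s-expression style with keyword parameters. The sketch below composes one such message: the `ask-one` performative and the `:sender`/`:receiver`/`:content` parameters follow common KQML usage, but the agent names and the query content are made up, and real KQML defines many more parameters (such as `:ontology` and `:reply-with`).

```python
def kqml_message(performative, **params):
    """Render a KQML performative with :keyword parameters (kwarg order kept)."""
    fields = " ".join(f":{key} {value}" for key, value in params.items())
    return f"({performative} {fields})"

msg = kqml_message("ask-one",
                   sender="buyer-agent",
                   receiver="stock-server",
                   content="(price IBM ?p)")
print(msg)
# (ask-one :sender buyer-agent :receiver stock-server :content (price IBM ?p))
```

The receiving agent would parse the performative to know *what kind* of speech act this is (a one-answer query) and interpret the `:content` in the agreed content language.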
Software agents can be classified as:
1. Desktop Agents
Agents that live and work in a Personal Computer (PC) environment, running on top of an operating system (OS). This class includes:
Operating System Agents
Application Agents
Application Suite Agents
2. Internet Agents
Agents that live and work in the Internet environment, managing information found on the Internet. This class includes:
Web Search Agents
Web Server Agents
Information Filtering Agents
Information Retrieval Agents
Notification Agents
Service Agents
Mobile Agents
References:
http://iwan.staff.gunadarma.ac.id/Downloads/files/22154/3_Proses.pdf
Group links:
Aprilina Putri : Threading Part 1 (http://aprilinaputri19.wordpress.com/2014/03/25/thread-part-1/)
Aries S Prayoga : Threading Part 2 (http://ariesprayoga.wordpress.com/2014/03/25/thread-part-2)
Fadhlanullah Sidiq : Client - Server (http://fadhlansymphony.blogspot.com.tr/2014/03/client-server.html)
Yanizar Dwi R : Agent Part 2 (http://teknophobia.blogspot.com/2014/03/sistem-teristribusi-proses-agent-part-2.html)
In brief, an agent:
- a. Is reactive: it can sense changes in its environment and act on those changes
- b. Is autonomous: it is able to control its own actions
- c. Is proactive: it has a drive to achieve its goals
- d. Works continuously up to a certain time
- e. Is communicative: it can communicate with other agents
- f. Is mobile: it can move from one host to another
- g. Is capable of learning: it can adapt based on previous experience
- h. Is trustworthy, so that it earns the end user's confidence
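The reactive and autonomous properties above are often illustrated with a condition-action (reflex) agent that maps percepts to actions. The sketch below is a minimal illustration under that reading; the percepts, actions, and rules are invented for the example.

```python
class ReflexAgent:
    """Maps percepts to actions through condition-action rules.

    Reactivity: it responds to what it senses. Autonomy: the choice is
    made by the agent's own rule table, not by the user at run time."""

    def __init__(self, rules, default_action):
        self.rules = rules                    # percept -> action table
        self.default_action = default_action  # fallback behaviour

    def act(self, percept):
        return self.rules.get(percept, self.default_action)

agent = ReflexAgent({"obstacle": "turn", "goal-visible": "approach"},
                    default_action="explore")
print(agent.act("obstacle"))      # turn
print(agent.act("nothing-seen"))  # explore
```

A goal-oriented agent would replace the static rule table with planning toward an explicit goal, and a learning agent would update the table from experience.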
Impact of Mobile Computing
Types of Mobile Computing
• Laptops are small, portable computers, integrated into a single casing and easy to carry anywhere. Laptops weigh roughly 1 to 6 pounds depending on size, materials, and specification. Power comes from a battery or an A/C adapter, which can recharge the battery and power the laptop itself. A laptop serves the same purpose as a desktop computer; the difference is only the size, which makes it easier for users to carry around.
• Wearable computers are computers worn on the human body. An example is the Glacier Ridgeline W200. The W200 is made from reinforced magnesium alloy, which maximizes strength while minimizing overall weight. At only 10.2 ounces and contoured to the arm, the W200 combines the features of a standard computer with a comfortable, ergonomic wrist-worn instrument. It has a 3.5" color touch screen, a backlit keyboard, and a hot-swappable battery. Its wireless functions ensure continuous connectivity regardless of the user's location, with plug-and-play Wi-Fi, Bluetooth, and GPS modules. Running Windows CE or Linux, the unit can be quickly configured to access remote host systems through integrated wired or wireless interfaces. Hands-free operation overcomes the physical limitations of a normal hand-held computer: the user is free to continue daily activities with both hands while retaining full access to the computer at all times. In addition to an electronic compass, the system integrates innovative features such as tilt and dead reckoning, which allow critical battery savings when the unit is not in use. The W200's hands-free usability makes it of special interest for emergency services, security, defense, warehousing, field logistics, and any area where access to large amounts of information is required on the move. With the Ridgeline W200, Glacier joins the rugged computers developed for data collection.
• PDAs (Personal Digital Assistants) are small, computer-based electronic devices that can be taken anywhere. PDAs were initially used mainly as personal organizers, but as they developed their functions multiplied: calculator, clock and scheduler, games, Internet access, sending and receiving e-mail, radio receiver, video player, and memo recorder. With a PDA (pocket computer) we can also keep an address book, read e-books, use GPS, and much more. More sophisticated PDAs can even be used as mobile phones or to access the Internet, intranets, or extranets via Wi-Fi or other wireless networks. A typical hallmark of the PDA is its touch screen.
• Smartphones are mobile phones offering advanced capabilities that virtually resemble those of a PC. Generally, a mobile phone is said to be a smartphone when it runs complete operating-system software that provides a standard interface and platform for application developers. Others say a smartphone is simply a mobile phone with advanced features such as the ability to send and receive e-mail, browse the Internet, read e-books, use a built-in full keyboard or an external USB keyboard, or connect to a VGA display. In other words, a smartphone is a miniature computer with phone capabilities.
4. Tools for Mobile Computing
- GPS (Global Positioning System)
- Wireless networks (access)
- GIS (Geographic Information System, for location)
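As an example of what a GPS-backed, location-aware application actually computes, the sketch below finds the great-circle distance between two coordinates using the haversine formula. The coordinate pair in the usage line is illustrative only.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Roughly Jakarta to Depok; prints a distance of around 22 km.
print(round(haversine_km(-6.2088, 106.8456, -6.4025, 106.7942), 1))
```

A GIS layers queryable map data on top of such raw positions, which is why GPS and GIS appear together in the tools list above.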
Advantages and Disadvantages
Advantages of Mobile Computing
- A wide range of applications
- Free movement between locations
- No need to switch networks
Disadvantages of Mobile Computing
Lack of bandwidth
Internet access on these devices is slow compared to wired access, although GPRS, EDGE, and 3G network technologies help; high-speed wireless LANs are not too expensive but have limited bandwidth.
Power consumption
Mobile computing is highly dependent on battery life.
Transmission interference
Distance from the transmitter and the weather affect data transmission in mobile computing.
Potential for accidents
Some recent accidents have been caused by motorists using mobile computing devices while driving.
Usability and Other Issues
As mobile Internet use grows, a site's usability becomes important to attract and retain users' attention ("user stickiness", the degree to which users stay on our site). There are three dimensions of usability: effectiveness, efficiency, and satisfaction. However, users often find today's mobile devices ineffective, particularly because of pocket-sized keyboards and other restrictions, which reduces their usability. Moreover, because of the limited storage capacity and slower information access of most smartphones and PDAs, it is difficult or impossible to download large files to this kind of equipment. These technical limitations and other restrictions slow the spread of m-commerce.
Failures in Mobile Computing and M-Commerce
As with other technologies, especially new ones, there have been many failures of applications and of entire companies in mobile computing and m-commerce. It is important to anticipate and plan for possible failure, and to learn from failure. The Northeast Utilities case provides some important insights.
The Impact of Modern Computing
The impact of modern computing is that it helps people solve complex problems using computers. One example is biometrics. "Biometric" derives from "bio" and "metric": "bio" is taken from ancient Greek and means life, while "metric" also derives from ancient Greek and means measure, so taken together "biometric" means measurement of life.
In outline, biometrics is the statistical analysis of biological data, referring to technology that analyzes the characteristics of a person's body. From this it is clear that biometrics describes the detection and classification of physical attributes. There are many different biometric techniques, including:
• Fingerprint/palm reading
• Hand geometry
• Retina/iris reading
• Voice recognition
• Signature dynamics
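Several of these techniques, iris reading in particular, reduce a biometric sample to a binary template and compare templates by Hamming distance (the fraction of differing bits). The toy codes and the 0.32 acceptance threshold below are illustrative only, not values from a real system.

```python
def hamming_fraction(code_a, code_b):
    """Fraction of bits that differ between two equal-length bit strings."""
    if len(code_a) != len(code_b):
        raise ValueError("templates must be the same length")
    diffs = sum(1 for a, b in zip(code_a, code_b) if a != b)
    return diffs / len(code_a)

enrolled = "1011001110100101"
probe    = "1011001010100111"   # same iris, with a few noisy bits
d = hamming_fraction(enrolled, probe)
print(d)                                    # 0.125
print("match" if d < 0.32 else "no match")  # match
```

Real iris codes run to thousands of bits and are compared under several rotations to tolerate head tilt, but the matching step is this same bitwise comparison.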
Don Tapscott (1995), in his book "The Digital Economy: Promise and Peril in the Age of Networked Intelligence", illustrates the impact of computing technology on human life. Applied technology is incomplete without the support of machines capable of intelligent analysis. The presence of ever more sophisticated computing technology has changed human lifestyles and the demands on human competence; human life now depends increasingly on computers. The following points describe the concept of application-supported computational intelligence:
1. Computer-driven products
a. Smart cars
b. Smart cards
c. Smart houses
d. Smart roads
2. Product design managed by computer
3. Work processes driven by computer
4. Computers as an effective means of communication
5. Computers as information centers
Besides its structural impact on human livelihoods, technology also triggers cultural processes in the societies it reaches. This is the symptom N. Postman calls technopoly, which he describes as follows:
"Technopoly is a state of culture. It is also a state of mind. It consists in the deification of technology, which means that the culture seeks its satisfactions in technology, and takes its orders from technology."
What matters, then, is the extent to which a society is ready to enter an age marked by the supremacy of technology as a new culture without putting the resilience of its own culture at risk. It is therefore fair to say that technological dominance will continue with the blossoming of a new culture, giving birth to new values that tend to become the benchmark of modern human behavior in various patterns of interaction with others.
Trends in Mobile Computing
Mobile devices have had a radical impact on the daily routines of individuals in the modern era. The introduction of the phone itself changed communication patterns in earlier years, and as the technology continues to evolve it now leaves its mark elsewhere as well. Mobile computing not only provides basic communication functions but also helps users perform everyday tasks such as organizing to-dos, social sharing, taking pictures, and other computing work.
With the introduction of wireless networks, mobile devices also gained advanced data capabilities, and mobile technology keeps adding new variations and improvisations to improve the overall mobile user experience. Some of the new trends introduced in mobile computing in recent years are:
Smartphone computing: third-party application development for smartphone platforms such as iOS, Android, and Windows Mobile has driven innovation in graphics and functionality. Concepts such as BYOD and enterprise mobility have brought smartphone applications into different industry domains as enterprise applications.
Security on mobile phones: with mobile phones becoming smarter every day, data handling has become an integral part of mobile computing. Devices are connected to the network at all times, so the need to secure stored data has emerged; mobile security has become an important component of mobile computing as the purpose of these devices has evolved from voice to data.
Wireless networking: network technologies such as 4G and WiMAX have recently been introduced, raising the data throughput available to devices and providing high-speed access to data. This helps users who need to transfer large amounts of data from their handheld devices.
M-commerce: online shopping has become a common activity, and with the growing practice of mobile computing users can now perform the same tasks from a hand-held phone or tablet. Various security measures are taken wherever the processing of financial information is involved, and mobile payment applications have been introduced to cement a strong foundation for m-commerce activity.
Location-based services: applications use the Global Positioning System (GPS) to give users access to information tied to their device's location.
Map services such as Google Maps let users get turn-by-turn navigation from their origin to their destination. Camera apps also use GPS for geo-tagging, so users can customize their map views according to their needs.
Sources:
http://ku2harlis.wordpress.com/komputasi-modern/
http://sumbait.blogspot.com/2013/03/komputasi-bergerak-nirkabel-dan-perpasif.html
http://ariwiyanto83.blogspot.com/
http://arissetiawan-balangan.blogspot.com/2012/10/artikel-artikel-komputer-masyarakat.html
http://berserkerdark.blogspot.com/
boser45.blogspot.com/2012/12/trends-in-mobile-computing.html
• Laptops are portable computers , small and can be carried anywhere very easily integrated in a casing . Weight laptops range from 1 to 6 pounds depending on size , materials and specifications . The power source comes from batteries or A / C adapter which can be used to recharge the battery and to power the laptop itself . Usefulness same laptop with a desktop computer , which distinguishes only the size making it easier for users to carry it around .
• Wearable Computer or computer that is applied in the human body . An example is Computer Glacier Ridgeline W200 . W200 is made from reinforced magnesium alloy which maximizes strength and minimizes overall weight . At only 10.2 ounces and was formed in the arm contour , W200 combines the same features of a standard computer with a device that provides comfort and ergonomic wrist worn instrument . The W200 has a 3.5 “color display with touch screen , backlit keyboard and a hot swappable battery . Wireless function of W200 ensure continuous connectivity regardless of the user’s location with plug and play Wi – Fi , Bluetooth and GPS modules . Using Windows CE or Linux operating systems , the unit can be quickly configured to access the remote host system through integrated wired or wireless interfaces . Hands – free operation of the W200 that overcomes the physical limitations associated with normal hand-held computer . This allows the user complete freedom to continue their daily activities with both hands while using the computer has full access at all times . In addition to the electronic compass , the system also integrates the latest and most innovative features , such as tilt and silent reckoning , which allows critical battery savings when the unit is not in use . Hands – free usability of the W200 makes it of special interest for Emergency Services , Security , Defense , Warehouse , Field Logistics and any area where access to a large amount of information required . W200 ridge line of the glacier when it joins rugged computers developed for data collection .
• PDAs ( Personal Digital Assistants ) is an electronic device and a computer -based small form and can be taken anywhere . According to my knowledge PDAs are widely used as a personal organizer at first , but because of its development , then multiply its utility function , such as a calculator , clock and timing pointer , computer games , internet users , receiving and sending electronic mail ( e – mail ) , radio receiver , video recorder , and a memo recorder . Apart from it with a PDA ( pocket computer ) , we can use the address book and store addresses , e-book reading , using GPS and many other functions . Even more sophisticated version of the PDA can be used as a mobile phone , Internet access , intranets , or extranets via Wi – Fi or Wireless Network . One of the typical PDA is the ultimate touch screen facility
• SmartPhone is a mobile phone offering advanced capabilities , its ability to resemble capabilities virtually the PC ( computer ) . Generally, a mobile phone as a smartphone when it is said to be running on the operating system software that is complete and has a standard interface and platform for application developers . While some say that a smartphone is a simple mobile phone with advanced features such as the ability to send and receive emails , surf the Internet and read e -books , built -in full keyboard or external USB keyboard , or has a VGA connector . In other words , the smartphone is a miniature computer with phone capabilities .
4 . Tool for Mobile Computing
- GPS ( Global Positioning System )
- Wireless ( Acess )
- GIS ( Location )
Excess and deficiency
Advantages of Mobile Computing
- Application wide
- Moving / berpidah freely locations
- Non- switch networks
Disadvantages of Mobile Computing
lack of Bandwidth
Internet access in peralatanini slow when compared to wired access , but with the use of technology GPRS , EDGE and 3G networks , high -speed Wireless LAN is not too expensive but has a limited bandwidth .
power consumption
Mobile computing is highly dependent on battery life .
Transmission disorders
Distance to the transmitter signal and weather affect transimis data on mobile computing .
Potential Occurrence of Accidents
Some accidents are often caused by akhir2 motorists who use mobile computing devices while driving .
Usefulness And Other Issues
As the use of mobile Internet using mobile internet sites , the ability of the site to be able to use (usability ) is important to attract and retain the attention of ” user stickiness ” ( the degree to which the user remain our site ) . There are three dimensions of usability , ie effectiveness , efficiency and satisfaction . However, users often find today’s mobile devices are not effective , especially because of restrictions pocket-sized keyboard and services , thus reducing its usefulness . Moreover, due to the limited storage capacity and speed of access to information than most smartphones and PDAs , as difficult or impossible to download large files from the per Alatan this kind . Technical limitations and other restrictions that slow the spread of m – commerce .
Failure In Mobile Computing and M – Commerce
Same with other technologies , especially new ones , there are many failures of the application and of the whole company in the mobile computing and m – commerce . It is important to anticipate and plan for the possibility of failure and learning from failure . Case Northaest Utilities beberikan some important insights .
The Impact of Modern Computing
The impact of modern computing is that it helps people solve complex problems using computers. One example is biometrics. The word "biometric" derives from "bio" and "metric": "bio" comes from ancient Greek and means life, while "metric", also from ancient Greek, means measure, so "biometric" can be read as the measurement of life.
In outline, biometrics is the statistical measurement and analysis of biological data, referring to technologies that analyze the characteristics of the human body. From this it is clear that biometrics covers the detection and classification of physical attributes. There are many different biometric techniques, including:
• Fingerprint / palm reading
• Hand geometry
• Retina / iris reading
• Voice recognition
• Signature dynamics
Don Tapscott (1995), in his book "The Digital Economy: Promise and Peril in the Age of Networked Intelligence", illustrates the impact of computing technology on human life. Technology applications are incomplete without the support of intelligent, analytically capable machines. The presence of increasingly sophisticated computing technology has changed human lifestyles and the demands on human competence; human life now depends more and more on computers. The following points describe areas where applications are supported by intelligent computing technology.
1. Computer-driven products
a. Smart cars
b. Smart cards
c. Smart houses
d. Smart roads
2. Product design managed by computer
3. Work processes driven by computer
4. The computer as an effective means of communication
5. The computer as an information center
Beyond its structural impact on human livelihoods, technology also sets cultural processes in motion in the societies it reaches. N. Postman calls this symptom technopoly, which he describes as follows:
"Technopoly is a state of culture. It is also a state of mind. It consists in the deification of technology, which means that the culture seeks its satisfactions in technology, and takes its orders from technology."
What matters, then, is the extent to which a society is ready to enter an age characterized by the supremacy of technology as a new culture without putting the resilience of its own culture at risk. It is therefore fair to say that technological dominance will continue alongside the blossoming of a new culture, giving birth to new values that tend to become the benchmark of modern human behavior in many patterns of interaction with others.
Trends in Mobile Computing
Mobile devices have had a radical impact on the daily routines of individuals in the modern era. The introduction of the telephone itself changed communication patterns years ago, and as the technology continues to evolve it now leaves its mark in other areas as well. Mobile computing provides not only basic communication functions but also helps users perform everyday tasks such as managing to-do lists, social sharing, taking pictures, and other computing tasks.
With the introduction of wireless networks, mobile devices have also gained advanced data capabilities. Mobile technology keeps adding new variations and improvements to the overall mobile user experience. Some of the trends introduced in mobile computing in recent years are:
Smartphone computing: Third-party application development for smartphone platforms such as iOS, Android, and Windows Mobile has improved the graphics and functionality of these applications. Concepts such as BYOD and enterprise mobility have brought smartphone applications into use as enterprise applications across different industry domains.
Security on mobile phones: With mobile phones becoming smarter every day, data handling has become an integral part of mobile computing. Devices are connected to the network at all times, so the need to secure stored data has emerged as well. Mobile security has become an important component of mobile computing because the purpose of these communication devices has evolved from voice to data.
Wireless networking: Network technologies such as 4G and WiMAX, introduced recently, have raised the data capabilities of devices and provide high-speed data access from them. This helps users who need to transfer large amounts of data from their handheld devices.
M-commerce: Online trading has become a common activity for users, easing the shopping experience. With the growing practice of mobile computing, users can now perform the same tasks using a hand-held mobile phone or tablet. Various security measures are applied wherever financial information is processed, and a range of mobile payment applications has been introduced to cement a strong foundation for m-commerce activity.
Location-based services are another class of mobile application; they use the Global Positioning System (GPS) to give users access to information tied to their device's location.
Map applications such as Google Maps let users get turn-by-turn navigation from their starting point to their destination address. Camera apps also use GPS for geo-tagging, so users can customize their map views according to their needs.
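A core computation behind location-based services like those above is the distance between two GPS fixes. As an illustrative sketch (the coordinates and function name are assumptions, not from the source), the haversine formula gives the great-circle distance between two latitude/longitude pairs:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS coordinates."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)   # delta latitude
    dl = math.radians(lon2 - lon1)   # delta longitude
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Example: Jakarta to Surabaya, roughly 660 km by great circle
d = haversine_km(-6.2088, 106.8456, -7.2575, 112.7521)
```

A real navigation app would feed such distances into routing and geo-tagging logic; this sketch only shows the raw distance step.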
Sources:
http://ku2harlis.wordpress.com/komputasi-modern/
http://sumbait.blogspot.com/2013/03/komputasi-bergerak-nirkabel-dan-perpasif.html
http://ariwiyanto83.blogspot.com/
http://arissetiawan-balangan.blogspot.com/2012/10/artikel-artikel-komputer-masyarakat.html
http://berserkerdark.blogspot.com/
boser45.blogspot.com/2012/12/trends-in-mobile-computing.html
Monday, 17 March 2014
Distributed Systems: Secure Shell (Case Study)
RPC Case Study
RPC (Remote Procedure Call) was first conceived in 1976. The first company to use RPC was Xerox, in 1981. RPC was first implemented on the Unix operating system as Sun's RPC (now called ONC RPC). ONC RPC is still widely used today on several platforms. Another Unix implementation was Apollo Computer's Network Computing System (NCS). NCS was later used as the foundation of DCE/RPC in the OSF Distributed Computing Environment (DCE). A decade later, Microsoft adopted DCE/RPC as Microsoft RPC (MSRPC), the base mechanism underlying DCOM (Distributed Component Object Model). Around the same time, in the mid-1990s, Xerox PARC's ILU and the Object Management Group's CORBA offered another RPC paradigm, based on distributed objects with a mechanism that uses method inheritance.
Remote Procedure Call (RPC) is a method that lets us invoke a procedure located on another computer. For this to work, a server must provide a remote procedure service. RPC assumes the existence of a low-level transport protocol such as TCP or UDP to carry message data between communicating programs. The RPC protocol is built on top of the eXternal Data Representation (XDR) protocol, a standard for representing data in remote communication. XDR encodes the parameters and results of every RPC service provided.
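The call pattern described above can be sketched with Python's standard-library XML-RPC modules. Note the hedge: this serializes calls as XML rather than XDR, and the `add` procedure is an invented example, but the client/server roles match the RPC model in the text.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: expose a procedure on an ephemeral local port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(lambda a, b: a + b, "add")

t = threading.Thread(target=server.serve_forever, daemon=True)
t.start()

# Client side: the remote procedure is invoked like a local one.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)  # executed on the server

server.shutdown()
```

The key property the sketch demonstrates is transparency: `client.add(2, 3)` reads like a local call, while the library handles encoding, transport, and decoding.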
RPC works as follows: the server opens a socket, then waits for clients requesting the procedures it provides. If a client does not know which port to contact, it can query a matchmaker on a fixed RPC port; the matchmaker returns the port used by the procedure the client requested.
RPC Steps
The stages in the figure above:
- The client calls a local stub procedure. The stub packs the parameters into a packet to be sent over the network. This process is called marshalling.
- The stub calls the network functions of the O/S (operating system) to send the message.
- The kernel then sends the message to the remote system. This can be connectionless or connection-oriented.
- The stub on the server side unmarshals the packet received from the network.
- The server stub then executes the procedure as a local call.
- When the procedure finishes, execution returns to the server stub.
- The server stub marshals the return value (the result) and sends the message back over the network.
- The message is sent back to the client.
- The client stub reads the message using the network functions.
- The message is unmarshalled and the return value is extracted for further processing in the local process.
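The stub steps above can be walked through in a simplified single-process sketch. JSON stands in for the real wire format, and all names (`client_stub`, `server_dispatch`, `PROCEDURES`) are illustrative, not part of any RPC standard:

```python
import json

def client_stub(proc, *args):
    request = json.dumps({"proc": proc, "args": list(args)})  # client marshals
    reply = server_dispatch(request)                          # "send" over the network
    return json.loads(reply)["result"]                        # client unmarshals reply

def server_dispatch(message):
    call = json.loads(message)                                # server stub unmarshals
    result = PROCEDURES[call["proc"]](*call["args"])          # executes as a local call
    return json.dumps({"result": result})                     # marshals the return value

# Registry of procedures the "server" offers.
PROCEDURES = {"multiply": lambda a, b: a * b}

value = client_stub("multiply", 6, 7)
```

Replacing `server_dispatch` with an actual socket send/receive would turn this into a (toy) networked RPC; the marshal/unmarshal boundary stays exactly where the listed steps put it.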
Advantages of RPC
- Relatively easy to use
Calling a remote procedure is not very different from calling a local procedure, so the programmer can concentrate on software logic and need not think about low-level details such as sockets or marshalling and unmarshalling.
- Robust
Since the 1980s, RPC has been widely used to develop mission-critical applications that require scalability, fault tolerance, and reliability.
Disadvantages of RPC
- Not flexible in the face of change
- Static relationship between client and server at run time
- Based on procedural/structured programming, which is dated compared with OOP
- Lack of location transparency; for example, the programmer may only pass by value, not by reference
- Communication is only between one client and one server (one-to-one at a time); communication between one client and several servers requires separate connections
RPC Case Study: SSH
One example of RPC in use is SSH. Secure Shell (SSH) is a network protocol that allows data to be exchanged over a secure channel between two network devices. It is used primarily on Linux- and Unix-based systems to access shell accounts. The main use of SSH is to log in securely to a computer system elsewhere on the network.
Examples of applications used for SSH are PuTTY and WinSCP. To use them, first make sure the SSH port on the target computer is active by enabling the service from a terminal.
A. PuTTY
First step: start PuTTY.
Second step:
The "login as" entry must be correct, and the password must match an account registered on the remote computer.
Third step:
You are now logged in. The important point here is that if you want full administrator access on the remote Linux computer, you must log in as root.
B. WinSCP
First step: enter the destination IP of the Linux computer to be remoted, then click Login.
After that, fill in the username and password of a user registered on that Linux system.
The WinSCP opening menu before use.
With WinSCP you can delete, rename, edit, and copy files or folders at will, without having to use the vi editor or confusing console commands, provided you are logged in to the remote Linux computer as root.
Another RPC case we often encounter is the printing service at a typing rental shop, which has one server computer, several client computers, and a printer connected only to the server. A user at a client computer wants to print data from their machine. Usually the user moves the data with an external device such as a diskette, flash disk, hard disk, or CD-RW, but with RPC this becomes more efficient.
The solution:
With RPC, to print data from a client computer, the client sends a "print" message to the server. The server receives the command and then runs the print job. Afterwards, the server sends the client a message with the information "file has been printed".
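The print-service scenario above can be sketched as a minimal request/reply exchange. Hedge: `PrintServer`, `PrintClient`, and the method names are invented for illustration; in a real deployment the client-server link would be an RPC connection and the server would drive the physical printer.

```python
class PrintServer:
    """Server computer: the only machine connected to the printer."""
    def __init__(self):
        self.jobs = []  # record of printed jobs, standing in for the printer

    def print_file(self, filename, contents):
        # A real server would spool `contents` to the printer here.
        self.jobs.append(filename)
        return f"file {filename} has been printed"

class PrintClient:
    """Client computer: sends a "print" request instead of moving files by hand."""
    def __init__(self, server):
        self.server = server  # stands in for the RPC connection

    def request_print(self, filename, contents):
        return self.server.print_file(filename, contents)

server = PrintServer()
client = PrintClient(server)
status = client.request_print("report.doc", b"...document data...")
```

The client never touches the printer or copies files to external media; it only issues the remote call and receives the status message, which is exactly the efficiency gain the text describes.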
References:
http://kweedhbuzz.blogspot.com/2013/03/studi-kasus-rpc.html
http://jevrie-brothers.blogspot.com/2013/03/remote-procedure-call-gagasan-tentang.html
http://hadisaputra3.blogspot.com/2013/03/studi-kasus-rpc.html
Group links:
1. Aprilina Putri: Protocols (Distributed Systems Communication Study) http://aprilinaputri19.wordpress.com/2014/03/17/protokol-sistem-terdistribusi-komunikasi-studi/
2. Ariens S. Prayoga: RPC (Remote Procedure Controller)
3. Fadhlanullah Sidiq: Distributed Object Concepts and Object Interfaces ( http://fadhlansymphony.blogspot.com/2014/03/konsep-objek-terdistribusi-dan-objek.html )
4. Yanizar Dwi: Distributed Systems: Communication (Case Study) http://teknophobia.blogspot.com/2014/03/sistem-terdistribusi-komunikasi-studi.html