Nowadays, AI development receives heightened attention in specific areas, and this has led human beings to recognise its significance. AI development has become a significant matter as a consequence of the IT revolution: the mechanical transformation of thinking and reasoning activities.
Information Technology (IT) has gathered a tremendous amount of data and information from various sources. This collection is waiting to be analysed.
Artificial Intelligence is software: an application, a piece of code or a computer program.
This software has the ability to learn.
AI has facilities such as a fast processor, memory and data/information storage. These facilities let AI perform calculations, try new logical sequences and create new processes.
AI uses that "knowledge" and "experience" to make decisions in new situations, "as humans do".
Yes, just as a human can decide.
Here is the problem that ADSC should solve.
After the basics have been clarified, AI can be formed from code, i.e. software. Researchers building such software try to write code that can read images, text, video or audio. The software can interpret the analysed data and information, perform calculations on them and "learn" something from them.
Once the software has learned, that knowledge can be put to use elsewhere.
Thus, the assertion is:
AI observes before it learns.
For example, if an AI algorithm learns to identify a face from pre-defined patterns, the algorithm later has to recognise this face in different situations by observation; this ability can then be used to find the person in the world via CCTV cameras. In modern AI, the combination of learning and observation is often called "training".
How can the human race analyse this collection of information and data?
It has to create a helper tool: a piece of code, a piece of software, an algorithm. Thus, AI was born.
What is AI (Artificial Intelligence)?
I leave aside the traditional interpretation of AI; I look beyond it, while taking into account that the traditional interpretations should help me create new definitions.
The basic conception of AI was introduced and formed by John McCarthy, an American computer scientist, in 1956 at the Dartmouth Conference.
Thus AI was born as an early discipline.
Nowadays, it is a so-called "umbrella term" that encompasses everything from robotic process automation to an era of genuinely thinking robots, across the whole structure of information technology.
It has gained prominence recently due, in part, to big data: the increase in the speed, size and variety of data that institutions are now collecting. Meanwhile, we are able to expand the usage of AI from the personal through the business to the governmental sphere.
Artificial Intelligence, pronounced AI, is a process that simulates human thinking and reasoning. This simulation process attempts to reproduce the behaviour, attitude and conduct of humans in different situations; it would simulate the reactions of humankind.
The simulation processes include
learning and observation (the acquisition and use of methodology, scope and subject matter of information, and of the rules and protocols for applying, processing and using the acquired information),
rule compliance and following (fulfilment of pre-installed or learned rules and protocols),
applying acquired (learned, acknowledged) information/data reasonably (in a situation-specific way),
reasoning (using the appropriate rules and adopted protocols to reach approximate or definite conclusions).
AI can perform tasks such as
identifying patterns in data more efficiently than humans,
enabling individuals and institutions to gain deeper insight through analysis of their data,
supporting data-driven actions to reduce risks (with more exact calculations and simulations),
increasing cyber security on institutions' networks (of IoT devices),
allowing low-risk user identification and authentication,
supporting holistic approaches and heavy decision-making,
recognising harmful third-party attacks,
complying with protocol rules.
Kinds of AI
1st kind of AI
is the so-called "Weak" AI. This class of AI systems, also known as "Narrow" AI, comprises systems that are designed and trained for certain (not extremely complex) tasks, such as virtual assistants (Apple Siri, Microsoft Cortana etc.). We can also enrol specialised task-simulation software here (in case the voice feedback is replaced by another input form).
2nd kind of AI
is "Strong" or "Heavy" AI, also known as artificial general intelligence. It encompasses an AI system with artificially generalised human cognitive abilities, so that when presented with an unfamiliar task it has enough intelligence (ability) to find the best solution (or a list of potential solutions).
The second kind of AI was analysed in more depth by Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University. He set up four different classes within the Strong AI kind, ranging from the kinds of AI systems that exist today to sentient systems, which do not yet exist.
(As our development progresses, the "do not yet exist" category will be replaced by the "exist" one.)
Arend Hintze's categories are:
Class 1: Reactive Machines.
IBM's chess software Deep Blue beat Garry Kasparov in the 1990s. Deep Blue was the first software able to identify the pieces on a chessboard and make predictions. It has no memory and cannot use past experiences to inform future ones; thus it could not learn and improve its technique. It could only analyse possible moves, its own and its opponent's, and choose the most strategic move. Deep Blue and Google's AlphaGo were both designed for special purposes only and cannot easily (it is almost impossible) be applied in another situation.
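Deep Blue's exhaustive move analysis can be illustrated with a minimal minimax sketch. The tiny game tree and the leaf scores below are hypothetical toy values, not Deep Blue's actual engine or evaluation function:

```python
# Minimal minimax sketch: a reactive machine evaluates possible moves
# (its own and the opponent's) and picks the most strategic one.
# The game tree here is a hypothetical toy example, not real chess.

def minimax(node, maximizing):
    """Return the best achievable score from this position."""
    if isinstance(node, int):          # leaf: a position's evaluation score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

def best_move(moves):
    """Choose the move whose subtree yields the highest minimax score."""
    return max(range(len(moves)), key=lambda i: minimax(moves[i], False))

# Three candidate moves, each leading to possible opponent replies:
game_tree = [[3, 5], [2, 9], [0, 1]]
print(best_move(game_tree))  # 0: the opponent can force 3, better than 2 or 0
```

Note that nothing is remembered between calls: each position is analysed from scratch, which is exactly what makes such a system "reactive".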
Class 2: Limited Memory.
The AI software in this class can use limited past experiences to inform future decisions. (These experiences are adopted on a pre-installed basis and serve as standards in different situations. The software calculates with these pre-installed standards, which form the base of its actions.) Such systems look as if they had their own ability to act.
Some of the decision-making functions in autonomous vehicles have been designed this way. The software in this class obeys the observed input data only; it is easily controllable.
Observations are used to inform actions happening in the not-so-distant future, such as a car that has changed lanes or avoided a predictable incident.
The results of these observations are not stored permanently. The memory is used for the current calculations only.
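A minimal sketch of limited memory, assuming a hypothetical lane-monitoring scenario: a small fixed-size buffer of recent observations informs the next decision, and older observations are discarded rather than stored permanently (the `LimitedMemoryAgent` class, its threshold and its readings are all illustrative):

```python
from collections import deque

# Limited-memory sketch: recent observations inform the next decision,
# but the buffer is small and old observations fall out of it; nothing
# is stored permanently. A hypothetical lane-monitoring example.

class LimitedMemoryAgent:
    def __init__(self, capacity=3):
        # only the last `capacity` observations are kept
        self.memory = deque(maxlen=capacity)

    def observe(self, lateral_speed):
        self.memory.append(lateral_speed)

    def decide(self):
        """Brake if all recent observations show a neighbour drifting toward us."""
        if len(self.memory) < self.memory.maxlen:
            return "cruise"            # not enough evidence yet
        drifting = all(s > 0.5 for s in self.memory)
        return "brake" if drifting else "cruise"

agent = LimitedMemoryAgent()
for speed in [0.1, 0.8, 0.9, 0.7]:    # the 0.1 reading falls out of memory
    agent.observe(speed)
print(agent.decide())  # "brake": the last three observations all exceed 0.5
```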
Class 3: Theory Of Mind.
This conception comes from a term in psychology. In this conception, the software is assumed to have the ability to understand, measure, balance, compare and decide something that could have an aftermath or consequence.
Third-class AI refers to the understanding that others have their own beliefs, desires and intentions that impact the decisions they make.
This class supposes that others have their own awareness. Decision-making and communication lead to a competent conversation.
This kind of AI is under development.
This class also has its own threats, because it uses cognitive experiences only and does not take moral values and protocol rules into consideration; it cannot recognise emotions as having their own depth.
Class 4: Self-Awareness Or Consciousness.
In this category, AI systems have a self-concept (a sense of self); thus this class forms an ecosystem.
Fourth-class AI has consciousness. Such software and machines could understand their own self-awareness, understand their current state and use this information to infer what others are feeling and thinking. They could recognise jokes and a sense of humour, and they could accept criticism without aversion or revulsion.
This type of AI is under development, but it has an appellation:
Artificial Differentiated Sophisticated Consciousness
This class carries low-risk threats, because this type of AI (ADSC) could recognise different types of emotions.
It could determine what respect means and could apply cultural and moral values through emotion simulation.
The close environment of fourth-class AI could integrate this type of AI, because there is synergy between fourth-class AI and its environment at different points.
AI In Technology – Class 2
Automation
is the process of making a system or process function automatically. This is Robotic Process Automation (RPA): automatically reproducible, repeated processes.
For example, RPA can be programmed to perform high-volume, repeatable acts, calculations and tasks.
Nevertheless, RPA is different from IT automation in that it can adapt to changing circumstances, like a simulation model.
Machine Learning
is the science of getting a computer to act without being explicitly programmed. It acts by pre-adopted patterns, disciplines and directives; it perceives the answers and mechanically accepts them. Machine learning is an autonomous process, but it cannot make its own decisions.
Deep Learning
is a subset of machine learning that, in very simple terms, can be thought of as the automation of predictive analytics.
There are three types of machine learning algorithms:
#1 Supervised Learning
in which data sets are labelled so that patterns can be detected and used to label new data sets;
#2 Unsupervised Learning
in which data sets aren't labelled and are sorted according to similarities or differences;
#3 Reinforcement Learning
in which data sets aren't labelled but, after performing an action or several actions, the AI system is given feedback.
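The three regimes can be contrasted with toy sketches (all data, labels and rewards below are hypothetical illustrations, not production algorithms):

```python
import random

# Toy sketches of the three machine learning regimes (illustrative only).

# 1. Supervised: labelled points let us label a new point (1-nearest neighbour).
def supervised_label(labelled, x):
    return min(labelled, key=lambda p: abs(p[0] - x))[1]

train = [(1.0, "small"), (2.0, "small"), (9.0, "large")]
print(supervised_label(train, 8.5))   # "large"

# 2. Unsupervised: unlabelled points are grouped by similarity alone.
def unsupervised_groups(points, gap=3.0):
    groups, current = [], [points[0]]
    for a, b in zip(points, points[1:]):
        if b - a > gap:
            groups.append(current)
            current = []
        current.append(b)
    groups.append(current)
    return groups

print(unsupervised_groups([1.0, 2.0, 9.0]))   # [[1.0, 2.0], [9.0]]

# 3. Reinforcement: no labels; feedback (reward) after actions guides choice.
values = {"A": 0.0, "B": 0.0}
counts = {"A": 0, "B": 0}
reward = {"A": 0.2, "B": 0.8}          # hidden from the learner
random.seed(0)
for _ in range(200):
    arm = random.choice(list(values)) if random.random() < 0.1 else \
          max(values, key=values.get)
    counts[arm] += 1
    values[arm] += (reward[arm] - values[arm]) / counts[arm]
print(max(values, key=values.get))     # "B": the higher-reward action wins
```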
Machine Vision
is the segment of making computers see and perceive.
In the machine vision conception, the system captures and analyses (in a word: perceives) visual information using one or more cameras, via digital conversion and digital signal processing.
Because machines use cameras similar to the human eye, machine "eyesight" is presumably much like human sight, as are the image processing and the perception of image information. There is only one difference: biology.
Machine vision is not bound by biology and can be modified to see through walls and different textures, see in the dark, reconstruct an image from points or fragments etc.
It can be used in a wide range of applications, from signature identification through face detection and military usage to medical image analysis.
Computer vision, which is focused on machine-based image processing, is often conflated with machine vision. A simple use of machine vision is face detection, where cameras observe and recognise faces in real time.
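The digital-conversion step can be sketched as a simple thresholding pass that turns captured brightness values into a binary image which later stages (such as face detection) can process. The 3×3 "frame" below is a hypothetical toy grid, not camera input:

```python
# Minimal sketch of digital conversion in machine vision:
# turn captured brightness values (0-255) into a binary image
# that later stages, such as face detection, can work with.
# The "frame" below is a hypothetical toy grid, not real camera input.

def binarize(frame, threshold=128):
    """Digital conversion: bright pixels become 1, dark pixels 0."""
    return [[1 if px >= threshold else 0 for px in row] for row in frame]

def bright_fraction(binary):
    """A trivial 'perception' step: how much of the scene is lit?"""
    pixels = [px for row in binary for px in row]
    return sum(pixels) / len(pixels)

frame = [
    [200, 210,  30],
    [190,  40,  20],
    [ 35,  25,  15],
]
binary = binarize(frame)
print(binary)                   # [[1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(bright_fraction(binary))  # 3 of 9 pixels are bright
```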
Natural Language Processing (NLP)
DO NOT CONFUSE this with Neuro-Linguistic Programming (an approach to communication, personal development and psychotherapy created by Richard Bandler and John Grinder in California, United States, in the 1970s; its creators claim there is a connection between neurological processes (neuro-), language (linguistic) and behavioural patterns learned through experience (programming), and that these can be changed to achieve specific goals in life).
Natural Language Processing is the processing of human language by computer software, which tries to explore the differences between expressions and inspects emphasised expressions and tones.
It has a prominent role in email client programs. One of the best-known examples of NLP is spam detection, which looks at the subject line and the text of an email, analyses their consistency and decides whether the message is junk.
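A minimal sketch of the spam-detection idea, assuming a hypothetical keyword list and threshold (real filters use trained classifiers rather than fixed word lists):

```python
# Minimal sketch of spam detection: score an email's subject and body
# against a keyword list and decide whether it is junk.
# The keywords and threshold are hypothetical illustrations, not a
# production filter (real systems learn these from labelled mail).

SPAM_WORDS = {"winner", "free", "prize", "urgent"}

def is_spam(subject, body, threshold=2):
    words = (subject + " " + body).lower().split()
    hits = sum(1 for w in words if w.strip(".,!?:") in SPAM_WORDS)
    return hits >= threshold

print(is_spam("URGENT: you are a winner!", "Claim your free prize now."))  # True
print(is_spam("Meeting notes", "See the agenda attached."))                # False
```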
Because NLP can do far more than junk-email detection, current approaches to NLP are based on machine learning. Its tasks include
text translation (for example Google Translate, Bing Translate etc.),
intellectual ability reconnaissance,
Pattern Recognition
is a branch of machine learning that focuses on identifying patterns in data.
Pattern recognition and identification in different situations play a very important role, for example in cybercrime detection: when a harmful application or piece of software tries to break through the firewall by reiterating the same signal, this is an easily captured pattern.
This is also important in space exploration, when a radio or X-ray signal needs to be analysed.
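The firewall example, where the same signal is reiterated against a network, can be sketched as a simple repetition count (the event log and the threshold are hypothetical illustrations):

```python
from collections import Counter

# Sketch of an easily captured pattern: repeated reiteration of the same
# signal (e.g. one request signature hammering a firewall). The event
# signatures and the threshold are hypothetical illustrations.

def repeated_signals(events, threshold=3):
    """Flag any signal that repeats at least `threshold` times."""
    counts = Counter(events)
    return [sig for sig, n in counts.items() if n >= threshold]

log = ["login:alice", "probe:22", "probe:22", "login:bob", "probe:22"]
print(repeated_signals(log))  # ['probe:22']
```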
Robotics
is a field of engineering focused on the design and manufacturing of robots.
Robot technology means simple programmable devices, which are often used to perform tasks that are difficult for humans to perform, or to perform consistently.
They are used on assembly lines in heavy machine and tool production, typically in car manufacturing.
Robotics is also used in space exploration by NASA to move large objects in space. These typical moves require exact, very fine actions at the appropriate time.
More recently, researchers have been using machine learning to build robots that can interact in social settings; for example, in Dubai there are mechanical police officers that observe crowds of people and answer inquiries.
The Air Taxis are the next generation of these machines.
AI In Technology – Class 3 and Class 4
Humans are naturally adept at recognising complex shapes and learning complex concepts.
A human can identify an object such as an apple (after having learned that it is an apple) and then recognise a different apple later on by the recognised, identified specifics.
A human is capable of identifying an object even in a restricted situation, for example when they cannot touch, see or smell the object. A human has more than one way to identify an object like the apple:
It has a shape (characteristics) such as a spherical form, stalk, stamen, exocarp and colour. These are visible.
It reveals other characteristics if the apple has been split (during cutting, the sound of the cut is audible).
It has a smell.
It is wet.
It has flesh (the mesocarp, or pulp); in the middle of the apple are the seeds; and it has a taste.
These are visible, touchable and tastable.
So many attributes allow a human to identify the apple.
But what about the software?
Machines (software) are very literal: a computer does not have flexible concepts of "similar", because the computer (software) does not have enough faculties to discover the qualities that a human has.
The computer (software) can recognise by "seeing" only.
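The gap between human and machine perception can be sketched as an attribute-matching toy: an observer with all senses confirms every apple attribute, while a "seeing-only" observer confirms just the visible ones (the attribute sets below are hypothetical illustrations):

```python
# Sketch of multi-attribute identification: the more senses/attributes
# an observer can check, the more confidently an object is identified.
# The attribute sets below are hypothetical illustrations.

APPLE = {"spherical", "stalk", "skin", "sweet-smell", "crisp-flesh", "seeds"}

def identification_confidence(observed):
    """Fraction of the known apple attributes the observer could confirm."""
    return len(observed & APPLE) / len(APPLE)

human_senses = {"spherical", "stalk", "skin", "sweet-smell", "crisp-flesh", "seeds"}
vision_only  = {"spherical", "stalk", "skin"}

print(identification_confidence(human_senses))  # 1.0: all attributes confirmed
print(identification_confidence(vision_only))   # 0.5: "seeing" alone confirms half
```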
That is not fair. Do not expect the same result if the participants do not have the same faculties for the same task.
A goal of artificial intelligence development is to make machines less literal.
It is almost impossible, because human perception is not eyesight alone.
While it is easy for a machine to tell whether two images of an apple, or two sentences, are exactly the same, artificial intelligence aims to recognise a picture of that same apple from a different angle or in different light; it is capturing the visual idea of an apple.
With ADSC we will extend the quality of perception with hearing, "smelling" and four-dimensional "touching" senses.
This is so-called "generalising": forming an idea based on similarities in data that come from observation and experience, rather than just the images or text the AI has seen. That more general idea can then be applied to things the AI has not seen before.
ADSC is a revolution in technology; it amasses the current essence of these conceptions. It is an ideology to follow during AI development.
"The goal is to reduce a complex human behaviour to a form that can be treated computationally. This, in turn, allows us to build systems that can undertake complex activities that are useful to people."
says Alex Rudnicky, a computer science professor at Carnegie Mellon University.
It is a wrong notion.
How close is ADSC?
The developers of this new kind of AI (ADSC) are already working on the next generation of issues.
We have not solved the basic problems; we have merely avoided them with new structural, integrated practical elements.
We recognised that the existing theoretical approach is not the best way to further the development of AI. We take into consideration practical elements and all types of human habits and reactions; simply put, we have enrolled psychology in our research.
Thus, we found a solution for how to teach computers to recognise what they see in images and video, like a child.
What is our keyword?
We have given the software/machine more faculties for perceiving different senses from the environment; thus this software can hear, see, smell and touch via different sensors.
We recognised that binary code is the transcription language between human language and machine language.
After that, it moves from recognition to understanding, not merely producing the word "apple". We use an expedient mind and tools to teach the software; thus we call it experimental learning.
It is not enough to know that an apple is a food related to other fruits and foods, that humans eat apples, can grow them and use them to make other types of food. We show that it is connected to nature, stories and so on. Most important is that these relations can generate further coherences, while the software can recognise the subject's other essences, such as the smell, taste and consistency of the substance (viscosity).
There's also the matter of understanding our language, where words have multiple meanings based on context.
Definitions are always evolving, and each person has a slightly different way of saying things.
How can software recognise this fluid, ever-changing construct?
Learning and teaching progress in AI at different speeds depending on the mentors. In "general" AI research, researchers are seeing incredible growth in the ability to understand images and video, a field called computer vision.
We as ADSC developers see that from a different viewpoint. We see that the AI researchers are trying to teach a matchbox to jump.
The experience of this almost impossible mission does little to help other AI understand text, a field called natural language processing.
These experiences develop "narrow AI", which means the AI is powerful at working with images, audio or text, but cannot learn in the same way from all three. An agnostic form of learning would be "general intelligence", which is what we see in humans.
We know that advancements in our individual developments and improvements will result in a unique solution and uncover more methodology, thereby sharing truths about how we can make machines learn, eventually converging into a unified method for building the finished ADSC.
AI has an important role in IT development
On this blog, you can find some special posts.
The posts cover the following subjects:
cyber dispatches, the computer world, new technologies in IT, important IT news, AI improvements and developments, AI & IT innovation, and important and significant economics news (research, improvements, new methods).
Not all articles are our own; some are from other publishers and authors. In that case, you will find a link to the original article, comments etc. at the bottom of the article.
We focus on Australian projects, but we would also like to publish important articles from other countries.
We are always searching and seeking new opportunities and new developments that could connect to the computer world. We will not find the best new ideologies every time, but everything we publish is significant from the viewpoint of the future.
We always want to do our best, but we cannot be liable for all published articles, mistakes, inaccuracies or failures. However, we check every article before publishing it.
If you have any questions, do not hesitate to contact us or use the chat form.