Current Events > What do you think is the most complex invention ever?

UT1999
01/26/18 8:27:32 AM
#1:


Or the greatest also even if it's not the most complex
---
"Sometimes they even attack wounded foxes"
... Copied to Clipboard!
UT1999
01/26/18 9:13:22 AM
#2:


somebody comment in this awesome topic
---
"Sometimes they even attack wounded foxes"
... Copied to Clipboard!
chill02
01/26/18 9:13:41 AM
#3:


this topic
---
Ave, true to Caesar.
... Copied to Clipboard!
DoctorVader
01/26/18 9:13:49 AM
#4:


The Universe
---
It all just disappears, doesn't it? Everything you are, gone in a moment, like breath on a mirror. - The Doctor
... Copied to Clipboard!
Vindris_SNH
01/26/18 9:15:45 AM
#5:


Most complex? Probably something that had to do with a lot of inventions before it. Like a spacecraft of some sort.
---
glitteringfairy: Just build the damn wall
ThyCorndog: and how exactly will that stop the mexican space program from orbital dropping illegal immigrants?
... Copied to Clipboard!
DevsBro
01/26/18 9:17:18 AM
#6:


*waits for some goober to say smartphone*
---
... Copied to Clipboard!
Rexdragon125
01/26/18 9:19:00 AM
#7:


Computers are definitely up there. No one person intimately understands how it all works. People designing hardware or software only really understand the small part they're in charge of.
... Copied to Clipboard!
Rika_Furude
01/26/18 9:19:02 AM
#8:


anything relating to either quantum computing, quantum entanglement, or whatever the LHC is for
computers are fairly complex as well
yes, that includes smartphones @DevsBro
---
... Copied to Clipboard!
UT1999
01/26/18 9:25:10 AM
#9:


Do you think the human mind has ever created something as complex as what evolution has been able to create? the immune system, eye, etc....?
---
"Sometimes they even attack wounded foxes"
... Copied to Clipboard!
DevsBro
01/26/18 12:31:22 PM
#10:


Computers are definitely up there. No one person intimately understands how it all works. People designing hardware or software only really understand the small part they're in charge of.

Nobody knows everything about anything. But some people know a lot.

Electricity is a fundamental force of nature that acts on electric charges, pulling them together or pushing them apart. This force, normalized per unit charge and applied over a distance, is called voltage. Because of the chemical properties of various metals and semiconductors (number of valence electrons, for example), the valence electrons of those elements can be passed easily from one atom to another when the atoms are packed closely together, as in solids, as long as a replacement is provided. For that reason, attaching a conductor between two different potentials (a voltage difference) causes the conductor's valence electrons to experience an electric force that encourages them to flow. The electrons move from the lower potential to the higher, which is logically equivalent to a positive charge moving from higher to lower (this is why people are confused about which direction current flows).

Power consumed is in terms of energy per time. Voltage is energy per charge and current is charge per time, meaning that power is current times voltage. If there is no voltage or no current, power is zero.

Because it's all about fields, you can put a small break in the circuit, and if you put a plate on each end, the field resulting from the difference in potential will attract electrons to one terminal and "holes" (the absence of electrons) to the other. This is called a capacitor.

There are two kinds of silicon, N and P, which are "doped" appropriately (by adding impurities that contribute extra electrons or "holes"; a hole is the absence of an electron) to encourage or discourage the movement of electrons. These are used to build all kinds of semiconductor devices, but the most relevant to computing is the Field-Effect Transistor, or FET. A FET places a piece of silicon between the two terminals of a capacitor. A voltage placed across the capacitor then causes the electrons and holes to separate within the silicon, creating a narrow channel where conductivity is high and electrons can flow freely.

Combining these FETs in series and in parallel allows data encoded in high and low voltages to be aggregated with gates, or logical constructs like "if this and that, do this" or "if neither this nor that, do this." This is called combinational logic. Using two types of FETs, one to route to the higher voltage and one to the lower voltage, is a technology called CMOS that allows the output range to match the input range and cuts power consumption drastically. Since all voltage differences are zero wherever the circuit is conducting (except when switching) and all currents are zero wherever there is a difference in potential, steady-state power is essentially zero.

Combinational logic can do all kinds of stuff, including addition of binary numbers and other mathematics.
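
For illustration, here's a rough C sketch of that idea: a 1-bit full adder built from nothing but AND, OR and XOR "gates," chained into a ripple-carry adder. The operand values are made up.

```c
#include <stdio.h>

/* A 1-bit full adder expressed with bitwise operators standing in for gates. */
static void full_adder(int a, int b, int carry_in, int *sum, int *carry_out)
{
    *sum       = a ^ b ^ carry_in;                /* XOR gates        */
    *carry_out = (a & b) | (carry_in & (a ^ b));  /* AND and OR gates */
}

int main(void)
{
    /* Add two 4-bit numbers one bit at a time, like a ripple-carry adder. */
    int a = 6, b = 3, carry = 0, result = 0;
    for (int i = 0; i < 4; i++) {
        int s;
        full_adder((a >> i) & 1, (b >> i) & 1, carry, &s, &carry);
        result |= s << i;
    }
    printf("%d + %d = %d\n", a, b, result);       /* prints 6 + 3 = 9 */
    return 0;
}
```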

But because the movement of electrons in an FET is limited by the strength of the field, and barring that, the universal speed limit, each FET takes a short amount of time to switch, and switching consumes power because current flows briefly while the conductivity of the silicon channel changes.

But another effect of the switching time is that it allows what is called sequential logic, where data can be stored in a construct called a flip-flop, arranged so that it reinforces its own value and, more relevantly, can be passed from one flip-flop to another sequentially and simultaneously. The first flip-flop can be given a new input value while the next reads its output value.

Sequential logic circuits can be used to shift/transfer data and perform complex calculations.
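
Here's a rough C sketch of that hand-off: a 4-bit shift register where, on every simulated clock tick, each "flip-flop" latches the value its neighbor held before the tick. The input bits are arbitrary.

```c
#include <stdio.h>

int main(void)
{
    int ff[4] = {0, 0, 0, 0};            /* four flip-flops                */
    int input_stream[] = {1, 0, 1, 1};   /* serial input, one bit per tick */

    for (int tick = 0; tick < 4; tick++) {
        /* Shift from the last flip-flop backward so every stage reads the
         * value its neighbor held before this clock edge. */
        for (int i = 3; i > 0; i--)
            ff[i] = ff[i - 1];
        ff[0] = input_stream[tick];
        printf("tick %d: %d%d%d%d\n", tick, ff[3], ff[2], ff[1], ff[0]);
    }
    return 0;
}
```
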
---
... Copied to Clipboard!
DevsBro
01/26/18 12:32:04 PM
#11:


Because these sequential logic circuits can loop logically, we have what are called finite state machines, where the value stored moves around as it is clocked. The output of the finite state machine can be demultiplexed (rerouted) to control different components at different times.
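
A rough C sketch of a finite state machine, with made-up states: the stored state determines what happens on each clock tick and which state comes next.

```c
#include <stdio.h>

typedef enum { FETCH, DECODE, EXECUTE } state_t;   /* illustrative states */

int main(void)
{
    state_t state = FETCH;
    for (int tick = 0; tick < 6; tick++) {
        switch (state) {                           /* act, then transition */
        case FETCH:   printf("tick %d: fetch\n",   tick); state = DECODE;  break;
        case DECODE:  printf("tick %d: decode\n",  tick); state = EXECUTE; break;
        case EXECUTE: printf("tick %d: execute\n", tick); state = FETCH;   break;
        }
    }
    return 0;
}
```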

If you take a finite state machine, give it a large number of flip-flops to store its data in, give it an arithmetic and logic unit that uses combinational and sequential logic to do calculations, tell it how to signal the components in response to its input memory and give it a few leads to provide output and accept input, you have a computer.

Generally, you'll have instruction memory, which contains a series of instructions for the processor to follow. Each instruction will be a set number of bits (individual binary values) in length, which the processor decodes to figure out which operation to execute, what data to execute it on, and where to store the result. Some of these instructions affect the flow of the program itself. The computer will have a program counter, a register (collection of flip-flops) that refers to the next instruction, which is generally incremented after each instruction but can also be affected by the instructions to make loops, conditional execution and subroutines.
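
To make the fetch/decode/execute idea concrete, here's a rough C sketch of a toy machine. The instruction format and opcodes are invented for illustration and don't correspond to any real architecture.

```c
#include <stdio.h>

enum { OP_LOAD, OP_ADD, OP_JUMP, OP_HALT };            /* made-up opcodes */

typedef struct { int opcode; int operand; } instruction_t;

int main(void)
{
    instruction_t program[] = {
        { OP_LOAD, 5 },   /* acc = 5  */
        { OP_ADD,  3 },   /* acc += 3 */
        { OP_JUMP, 3 },   /* pc = 3   */
        { OP_HALT, 0 },
    };
    int pc = 0, acc = 0, running = 1;

    while (running) {
        instruction_t inst = program[pc++];            /* fetch, bump the program counter */
        switch (inst.opcode) {                         /* decode and execute              */
        case OP_LOAD: acc  = inst.operand; break;
        case OP_ADD:  acc += inst.operand; break;
        case OP_JUMP: pc   = inst.operand; break;
        case OP_HALT: running = 0;         break;
        }
    }
    printf("accumulator = %d\n", acc);                 /* prints 8 */
    return 0;
}
```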

Because faster memory is more expensive, there is usually a hierarchical organization of memory, beginning with the processor's own registers, moving down to its cache, then moving outside to the RAM and finally to flash/HDD/SSD. Today, we even have cloud-based storage beyond that. Because of this structure, it's necessary to be able to move data around from one level to another, so instructions need to exist to carry that out.

Most computers will have add, subtract, load, save, branch and comparison instructions. More complex computers will have more instructions. More specialized computers might not have all of the above.

Every computer will have a maximum memory address that it can access, but with a little engineering, you can usually give it more memory than that anyway by careful manipulation of a separate output port to activate one of a number of memory chips.

Most computers will also have an interrupt capability that allows them to avoid checking for input regularly. The interrupt instead tells the computer that data is available. Generally, upon interrupt, a computer will refer to a predefined memory address that handles the event, and generally, this handler defers to another subroutine.

Software, at its basic level, is the computer's instructions. Many architectures will have an assembler, a program that allows the user to write the program in a more easily readable format and then translates the instructions into the appropriate bits before programming the memory.

Eventually, higher and higher level constructs were introduced to make the process easier and portable between architectures. These high-level languages are compiled (translated), linked (the pieces put together) and converted to the architecture's instructions.

Many of these high-level languages are modular, where many subprograms can be written and used as needed by the main program. Some of these subprograms are distributed themselves as libraries. Subprograms will often accept parameters, just like functions in mathematics. They will generally have data types, integer vs float vs character, and pointers to those types.
---
... Copied to Clipboard!
DevsBro
01/26/18 12:33:08 PM
#12:


A pointer is essentially an integer, but it's one that signifies a memory address. They're often typed so that arithmetic can be done properly (if you have a 32-bit integer pointer and increment it, it will point to the memory address 4 bytes ahead instead of 1). These are used for a variety of reasons, such as passing by reference instead of value, handling dynamically-allocated data, and iterating through arrays. They're notorious for causing problems when null, like segfaults and NullPointerExceptions, which you can resolve by always checking for null before dereferencing. They can also be problematic if left dangling--referring to data that no longer exists or is no longer relevant (<3 Ada for making this a compile-time error unless you go out of your way to tell it to do it anyway).
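
A rough C sketch of both points, pointer arithmetic being scaled by the pointed-to type and the null check before a dereference, using made-up values:

```c
#include <stdio.h>

int main(void)
{
    int values[] = {10, 20, 30, 40};
    int *p = values;

    p++;                              /* advances by one int (typically 4 bytes), not 1 byte */
    printf("*p = %d\n", *p);          /* prints 20 */

    int *maybe_null = NULL;
    if (maybe_null != NULL)           /* always check before dereferencing */
        printf("%d\n", *maybe_null);
    else
        printf("pointer was null, skipping dereference\n");
    return 0;
}
```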

On the topic of pass-by-reference vs pass-by-value, one trap lots of people fall into is assuming there is no difference between passing data by reference and passing a pointer to data by value. This especially trips people up in languages like C# and Java where objects are secretly handled by pointer behind the scenes at all times. Try writing a swap function in Java and you'll see what I mean.
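
The same pitfall sketched in C: the first swap gets copies of the values and changes nothing the caller can see, while the second gets copies of pointers and can reach back to the caller's variables.

```c
#include <stdio.h>

void swap_by_value(int a, int b)     { int t = a;  a = b;   b = t;  }  /* no visible effect   */
void swap_by_pointer(int *a, int *b) { int t = *a; *a = *b; *b = t; }  /* swaps the originals */

int main(void)
{
    int x = 1, y = 2;
    swap_by_value(x, y);
    printf("after swap_by_value:   x=%d y=%d\n", x, y);  /* still 1 2 */
    swap_by_pointer(&x, &y);
    printf("after swap_by_pointer: x=%d y=%d\n", x, y);  /* now 2 1   */
    return 0;
}
```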

Dynamically-allocated data via pointers allows for lots of different data structures: linked lists, where one entry points to the next; trees, where one entry points to multiple children; and graphs, where any entry can point to any other. Trees are great for organized data, graphs are great for complex data, and linked lists are great just because you don't have to specify a length up front.
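
A rough C sketch of the linked list case, with arbitrary values: each dynamically allocated node points to the next.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct node {
    int value;
    struct node *next;   /* pointer to the next entry */
} node_t;

int main(void)
{
    node_t *head = NULL;

    /* Push three values onto the front of the list. */
    for (int i = 1; i <= 3; i++) {
        node_t *n = malloc(sizeof *n);
        if (!n) return 1;
        n->value = i * 10;
        n->next = head;
        head = n;
    }

    /* Walk the list, printing and freeing as we go. */
    for (node_t *cur = head; cur != NULL; ) {
        printf("%d\n", cur->value);   /* 30, 20, 10 */
        node_t *next = cur->next;
        free(cur);
        cur = next;
    }
    return 0;
}
```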

Tree traversal and search algorithms can check self then children, or children then self; depending on which you choose, you'll be working from the top down or from the bottom up as you sweep left to right or right to left. You could also search level by level via breadth-first search techniques. There are also algorithms to balance the trees. Trees are additionally used in Huffman compression algorithms, which assign variable-length codes to different characters based on the frequency of appearance of that character.
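
A rough C sketch of the self-then-children vs children-then-self choice, on a tiny hard-coded tree: the only difference is where the visit happens relative to the recursive calls.

```c
#include <stdio.h>

typedef struct tree {
    int value;
    struct tree *left, *right;
} tree_t;

void preorder(tree_t *t)    /* self, then children (top down)  */
{
    if (!t) return;
    printf("%d ", t->value);
    preorder(t->left);
    preorder(t->right);
}

void postorder(tree_t *t)   /* children, then self (bottom up) */
{
    if (!t) return;
    postorder(t->left);
    postorder(t->right);
    printf("%d ", t->value);
}

int main(void)
{
    tree_t leaves[] = { {1, NULL, NULL}, {3, NULL, NULL} };
    tree_t root = { 2, &leaves[0], &leaves[1] };

    preorder(&root);  printf("(self then children)\n");   /* 2 1 3 */
    postorder(&root); printf("(children then self)\n");   /* 1 3 2 */
    return 0;
}
```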

There are far too many graph search and spanning algorithms to touch on even a small assortment here, but I do want to mention Dijkstra's algorithm, as we'll talk about it later. Dijkstra's algorithm is built around the idea of having each node build its own shortest-path tree to every other node, an idea that would be reused for networking.
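
A rough C sketch of Dijkstra's algorithm on a small made-up graph stored as an adjacency matrix (0 meaning "no edge"): starting from one source, it repeatedly settles the closest unsettled node and relaxes its edges.

```c
#include <stdio.h>
#include <limits.h>

#define N 4   /* number of nodes in this toy graph */

int main(void)
{
    int graph[N][N] = {                    /* symmetric edge weights, 0 = no edge */
        {0, 4, 1, 0},
        {4, 0, 2, 5},
        {1, 2, 0, 8},
        {0, 5, 8, 0},
    };
    int dist[N], settled[N] = {0};
    for (int i = 0; i < N; i++) dist[i] = INT_MAX;
    dist[0] = 0;                           /* source node 0 */

    for (int round = 0; round < N; round++) {
        int u = -1;
        for (int i = 0; i < N; i++)        /* pick the closest unsettled node */
            if (!settled[i] && (u == -1 || dist[i] < dist[u]))
                u = i;
        if (dist[u] == INT_MAX) break;     /* rest of the graph is unreachable */
        settled[u] = 1;
        for (int v = 0; v < N; v++)        /* relax edges leaving u */
            if (graph[u][v] && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }
    for (int i = 0; i < N; i++)
        printf("shortest distance 0 -> %d: %d\n", i, dist[i]);
    return 0;
}
```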

Getting back to datatypes, a float datatype isn't actually a decimal value, as it appears to be. It's encoded on the hardware level as something like scientific notation, but for binary: a sign bit, a few exponent bits (an exponent of 2) and a mantissa, which is the value to be multiplied. For this reason, some values, like 0.3, which can be expressed easily in decimal, can't be stored with perfect precision in float format, which is why you want to use abs(a-b) < epsilon comparisons instead of checking for equality directly.
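
A rough C sketch of exactly that, using the classic 0.1 + 0.2 example and an arbitrary tolerance:

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double a = 0.1 + 0.2;   /* not exactly 0.3 in binary floating point */
    double b = 0.3;

    printf("a == b      -> %s\n", (a == b) ? "true" : "false");   /* false               */
    printf("fabs(a - b) -> %.20f\n", fabs(a - b));                /* tiny, but not zero  */

    double epsilon = 1e-9;                                        /* arbitrary tolerance */
    printf("within eps  -> %s\n", (fabs(a - b) < epsilon) ? "true" : "false");
    return 0;
}
```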

Character types will be encoded as integers, and in some languages the two types can be used the same way. This is why ASCII, Unicode and all those other encodings exist. Strings are generally either arrays or linked lists of characters.
---
... Copied to Clipboard!
DevsBro
01/26/18 12:33:40 PM
#13:


Object-oriented languages have countless features that make programming more flexible, like polymorphism, by which one datatype can qualify as another; inheritance, by which one datatype inherits values from another; and dynamic dispatching (which rules), which allows subprograms to be selected based on the datatype of the object passed to them.
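
Dynamic dispatch can even be hand-rolled in C with a function pointer, which is roughly what object-oriented languages do behind the scenes with vtables. A rough sketch with made-up types:

```c
#include <stdio.h>

typedef struct animal {
    const char *name;
    void (*speak)(const struct animal *self);   /* the "virtual" subprogram */
} animal_t;

void dog_speak(const animal_t *self) { printf("%s says woof\n", self->name); }
void cat_speak(const animal_t *self) { printf("%s says meow\n", self->name); }

int main(void)
{
    animal_t animals[] = {
        { "Rex",  dog_speak },
        { "Luna", cat_speak },
    };
    /* Same call site; which subprogram runs depends on the object passed. */
    for (int i = 0; i < 2; i++)
        animals[i].speak(&animals[i]);
    return 0;
}
```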

Some languages are even interpreted in real-time instead of compiled. This means another program is reading the code and deciding what to do.

Heading back to hardware land, there are countless techniques used to improve the time or space efficiency of computer processors. Using a multicycle datapath lets shorter instructions complete without waiting the full amount of time a longer instruction would need before the processor moves on to the next instruction.

Pipelining can improve performance even more by almost achieving one (short) clock cycle per instruction, barring stalls/hazards in the pipeline and the initial fill of the pipeline. The concept is the same as an assembly line: one instruction is being read from memory while the previous one is being decoded, while the one before that is being executed, while the one before that is having its result written to memory.

But pipelining increases the complexity of the processor a lot. What if one instruction needs to read a given register but the previous instruction hasn't finished writing its value? One option is to stall, or wait for the previous instruction to complete, but that kills your performance. Another option is out-of-order execution, where you find another instruction that you can do while you wait. This is accomplished via a "scoreboard" that keeps track of which instructions need which registers.

Another problem is branching. What do you do if you don't know which instruction to load next? Eager execution says "do both" but that leaves you open to Meltdown and Spectre. Branch prediction says take a guess, but that risks you having to go back when you guess wrong AND leaves you open to Meltdown and Spectre. I'm looking forward to what solutions we will make for these exploits.

There's also superscalar architecture, where you have multiple execution units, which also relies on out-of-order execution.
---
... Copied to Clipboard!
DevsBro
01/26/18 12:35:05 PM
#14:


Parallel processing and SIMD capabilities also help execution times. PP is using multiple computers at once to solve a single problem. SIMD is single-instruction-multiple-data, which is one common PP paradigm, but because it's single instruction, a single computer can get some manner of SIMD by putting multiple small values into one register and not carrying between words. MIMD is another, where you have different machines doing different parts of the job on different data, and finally MISD, where you have multiple machines doing different things to the same data. There are C APIs that can be used to write parallel programs. CUDA is one I have experience with for fine-grained GPU work; MPI is one I used for coarse-grained supercomputer batch jobs.
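
That "SIMD within a register" trick can be sketched in plain C, with made-up lane values: four 8-bit lanes packed into one 32-bit word, added in a single operation while masking keeps each lane's carry from spilling into its neighbor.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t a = 0x01020304;   /* lanes: 1, 2, 3, 4      */
    uint32_t b = 0x10204080;   /* lanes: 16, 32, 64, 128 */

    /* Add the low 7 bits of each lane, then patch the top bit back in with
     * XOR so no lane's carry reaches the lane above it. */
    uint32_t low = (a & 0x7F7F7F7FU) + (b & 0x7F7F7F7FU);
    uint32_t sum = low ^ ((a ^ b) & 0x80808080U);

    printf("packed sum = 0x%08X\n", sum);   /* 0x11224384: lanes 17, 34, 67, 132 */
    return 0;
}
```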

There are lots of logical arrangements you can use for parallel processing, when the data needs to be passed from one machine to another. You can put them all in a line, but this is slow if you need to go from one machine to one on the other end of the chain. You can loop it around at the end too. Or you can make a 2D grid, looped. Or a 3D, 4D, 5D, etc., grid, looped. There's also what was called the hypercube, where you only have two processors in any given dimension, and you just add more dimensions as you add processors. You can cut down on the overhead by pipelining messages broken down into "flits" (small pieces of the message).

Not every job is perfectly parallelizable; most still have a serial part, and that serial part caps the overall speedup no matter how many processors you add (Amdahl's law).

PP, just like other types of concurrency, has the potential for conflicts where two processors need to access the same data. More on this later.

Operating systems. Most of the functionality people generally associate with an operating system actually has little to do with its basic purpose. The basic purpose is to handle processes. It evolved from the concept of time sharing in early computing, where multiple users would submit jobs to the same machine and it would handle them as the users thought of the next thing they wanted it to do.

The operating system treats processes like those old-timey users. It will run one process (program, if you will) until the program needs to wait for input (potentially from memory/drive), or until some other conditions are met that the OS decides mean the process needs to be rotated out (time is often a factor). Every operating system has its own state table for processes: on what conditions to swap them out, what state to move them to in that case, what those states are, etc. One reason Linux and OSX are so similar is because this functionality is so similar.

Threading is the Operating System concurrency concept. The OS will generally assign threads to cores of the processor, but will sometimes assign multiple threads to a core. Threads are easier and faster to switch than processes. As with other types of concurrency, you have concurrency conflicts.

Deadlock arises when two or more threads are waiting on each other. The dining philosophers problem is the thought experiment on that one. One solution is a mutex or semaphore, which is like having a waiter at the table to decide who gets the resources. Another solution is to assign a rank and grant resources based on seniority. Livelock is a similar situation where each thread attempts to relinquish control but keeps getting in the others' way as it does so.
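
A rough pthreads sketch of the mutex idea, with an arbitrary workload: two threads increment a shared counter, and the lock makes sure only one touches it at a time.

```c
#include <stdio.h>
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* ask the "waiter" for the resource */
        counter++;
        pthread_mutex_unlock(&lock);  /* hand it back                      */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* 200000 every time, thanks to the lock */
    return 0;
}
```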

The producer/consumer problem. If one thread is generating data, and a faster one is reading it, how do you make them play nice? One solution is that the consumer has to wait.
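
One common way to make the consumer wait is a condition variable; here's a rough pthreads sketch with a single made-up value standing in for the data stream.

```c
#include <stdio.h>
#include <pthread.h>

static int data_ready = 0;
static int shared_value = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    shared_value = 42;                /* "generate" some data      */
    data_ready = 1;
    pthread_cond_signal(&cond);       /* wake the waiting consumer */
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!data_ready)               /* sleep here, releasing the lock, until signaled */
        pthread_cond_wait(&cond, &lock);
    printf("consumed %d\n", shared_value);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```
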
---
... Copied to Clipboard!
DevsBro
01/26/18 12:35:18 PM
#15:


Networking. Routers and switches are used to manage communication between multiple computers. Switches build a spanning tree to avoid loops, and link-state routers run Dijkstra's algorithm to compute shortest-path trees. Data is sent in packets. Each packet includes its destination so that the router knows where to send it. They use shift registers to transmit the data. Serial vs parallel communication ports: one bit at a time vs several? Each router has a buffer it uses for data it has not yet routed. If the buffer overflows, data is lost. TCP will then resend it; UDP doesn't care.
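
A rough C sketch of the "UDP doesn't care" half: a minimal datagram send where the destination address rides along with the packet and nothing resends it if it's lost. The address and port are made up.

```c
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);        /* UDP socket */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof dest);
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9999);                      /* made-up port    */
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);  /* made-up address */

    const char *msg = "hello";
    sendto(sock, msg, strlen(msg), 0,
           (struct sockaddr *)&dest, sizeof dest);    /* fire and forget */

    close(sock);
    return 0;
}
```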

The five layers of the Internet: physical, data link, network, transport, application. Total delay = propagation delay + queuing delay + transmission delay. Socket programming. HTML defines a document, JavaScript is used for form validation before sending, PHP for servers (but neither has to be used for that), SQL for databases, XML for data storage, JSON, YAML, AJAX, FTTP, FTTH, FQDN, HTTPS, AD, FTP, SFTP, FTPS, SSH, SSL, encryption, SQLIA, buffer overflow attacks, XSS, PuTTY, Xming, Cygwin, cookies, sessions, headers, methods, ASP, exceptions, embedded systems, RTOS, DAC, ADC, FPGAs, PLCs, SAD, PID, SPI, USB, Ethernet, crossover cables, DNS, hosts file, localhost, XAMPP, Apache, Tomcat, MVC, ASP.NET, AHHHHHHHHH I DON'T WANT TO WRITE ANY MORE!!!
---
... Copied to Clipboard!
Anarchy_Juiblex
01/26/18 12:35:36 PM
#16:


LHC probably takes the cake.
---
"Tolerance of intolerance is cowardice." ~ Ayaan Hirsi Ali
... Copied to Clipboard!
Hummer 2
01/26/18 12:36:42 PM
#17:


those realistic sex dolls
... Copied to Clipboard!
Smashingpmkns
01/26/18 12:36:50 PM
#18:


The simulation we are living in right now
---
Posted with GameRaven 3.3
... Copied to Clipboard!
DevsBro
01/26/18 12:41:08 PM
#19:


computers are fairly complex as well
yes, that includes smartphones

Computers include smartphones. Yes, this is exactly my point.
---
... Copied to Clipboard!
Sonic23004
01/26/18 12:42:11 PM
#20:


Rick and Morty
---
Gameplay is a major part of video games, yes. But it isn't what makes video gamed great.
-UltimaSora91
... Copied to Clipboard!
MC_BatCommander
01/26/18 12:44:09 PM
#21:


Computers are pretty incredible. Like I look at a motherboard and it blows my fucking mind. How does this thing make my whole computer work? How does this weird stick with big fans on it that I plug into my motherboard make the pretty graphics?
---
The Legend is True!
... Copied to Clipboard!
Romes187
01/26/18 12:45:19 PM
#22:


evolution doesn't create, just fyi

it's competence without comprehension
... Copied to Clipboard!
UT1999
01/26/18 12:45:22 PM
#23:


MC_BatCommander posted...
Computers are pretty incredible. Like I look at a motherboard and it blows my fucking mind. How does this thing make my whole computer work? How does this weird stick with big fans on it that I plug into my motherboard make the pretty graphics?

but is it more complex than what nature has ever created? like the eye etc?
---
"Sometimes they even attack wounded foxes"
... Copied to Clipboard!
Ninja-Yatsu
01/26/18 12:47:21 PM
#24:


Sonic23004 posted...
Rick and Morty

"To be fair, you have to have a very high IQ to understand Rick and Morty...
---
Leleportation: Teleporting away while your laughter breaks physics by lingering when you've already teleported away.
... Copied to Clipboard!
KILBOTz
01/26/18 12:48:53 PM
#25:


greatest invention is the Saturn V rocket. most complex, don't really care.
---
... Copied to Clipboard!
UT1999
01/26/18 12:53:26 PM
#26:


which one was the saturn v rocket? imo it might be putting a man on the moon and then able to return him to earth
---
"Sometimes they even attack wounded foxes"
... Copied to Clipboard!
Hicks233
01/26/18 12:53:59 PM
#27:


Most complex? Human Society.

Greatest? Language.
---
... Copied to Clipboard!
KILBOTz
01/26/18 12:54:38 PM
#28:


UT1999 posted...
which one was the saturn v rocket? imo it might be putting a man on the moon and then able to return him to earth


yeah it was the one for the Apollo missions.
---
... Copied to Clipboard!
treewojima
01/26/18 12:55:08 PM
#29:


showoff
... Copied to Clipboard!
Nazanir
01/26/18 12:56:28 PM
#30:


Magnets.
---
XboX GT/Steam/Wii-U - Nazanir
... Copied to Clipboard!
#31
Post #31 was unavailable or deleted.
__Cam__
01/26/18 1:04:23 PM
#32:


Either a microprocessor or a particle accelerator.
---
i5-7600K (Kaby Lake) | GTX 1070 | 16GB DDR4 | GIGABYTE GA-Z270-HD3
... Copied to Clipboard!
AmonAmarth
01/26/18 1:08:43 PM
#33:


the toilet
the automobile
the plane
computer chips
supercomputers (and they all use linux).
pizza

:)
---
i7-4790@ 3.6GHZ | GA-Z97-HD3 | ASUS GTX 960 2GB | Samsung 850 EVO 250GB | 1TB HDD | CX750M | 12GB DDR3
... Copied to Clipboard!
Rexdragon125
01/26/18 1:09:49 PM
#34:


DevsBro posted...
tldr

No one person knows all the implementation details of that. You don't have logic and block diagrams of your specific CPU, the source of its microcode, logic diagrams of all the other little chips on your motherboard, etc. You don't have to. You're a programmer, so you can do your job as long as everything else does its job.

Until your PC starts randomly BSODing and you find out the error code can be caused by literally anything. Fuck computers.
... Copied to Clipboard!
DevsBro
01/26/18 1:26:25 PM
#35:


You don't have logic and block diagrams of your specific CPU, the source of its microcode

It's funny you mention that because I actually do. I literally have a tab open in my browser right now.

But never mind that, this is why I say nobody knows everything about anything.
---
... Copied to Clipboard!