1295 Views · 13 Replies


I am interested in building certain components of computer processors using discrete components only. I want to build at the individual component level (transistors, resistors, caps) and not at the gate level (a quad AND gate or hex inverter IC), and I have run into some problems.
I have a few questions, and they are large. If you can answer any of them I would be glad, but could you number your answers so I know which question you are talking about?

((1)) My first question is about how to build logic gates with discrete components. I have seen some schematic diagrams for AND gates, OR gates, and inverters; they all seem to have some problems. For example, an AND gate might have two BJTs where the base of each is an input.

http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/ietron/and4.gif
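As a logic-level sanity check, the series arrangement in that schematic behaves as two switches that must both conduct; this truth-table model is illustrative only and ignores saturation voltage and resistor values:

```python
# The linked AND gate: two BJTs in series act as switches. The
# output path only completes when BOTH bases are driven high,
# so the output is A AND B. Purely a logic-level model.

def and_gate(a, b):
    # both "switches" must conduct for the output path to complete
    return 1 if (a and b) else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_gate(a, b))
```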

My question stems from the saturation voltage of transistors. I assume that in logic circuits you want a transistor to act as a switch, and thus would want to make sure that its voltage drop is 0. I learned that to do this, you find the resistance of the load you are trying to drive and figure out how much current it would take for the voltage drop across the load to be Vcc. (A light bulb has a resistance of 100 ohms, Vcc is 10 V, so V = IR dictates that the current through the light bulb would be 0.1 A.) Then you divide this current by β (the transistor's current gain) to find the current that needs to flow through the base-emitter path.
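The arithmetic described above can be sketched in a few lines, using the light-bulb numbers from the paragraph; the gain value of 100 is an assumed typical figure, not from the post:

```python
# Base-drive calculation for a BJT switch, using the example above:
# Vcc = 10 V, load = 100 ohms, beta (current gain) = 100 (assumed).

def base_current_for_saturation(vcc, r_load, beta, overdrive=1.0):
    """Collector current at full load, divided by beta.

    overdrive > 1 gives a safety margin, since beta varies
    widely between parts and with temperature.
    """
    i_collector = vcc / r_load          # V = IR  =>  I = V / R
    return overdrive * i_collector / beta

i_base = base_current_for_saturation(10.0, 100.0, 100.0)
print(i_base)  # 0.001 A, i.e. 1 mA of base drive
```

In practice designers often multiply by an overdrive factor of 2 to 10 rather than trusting the datasheet beta exactly, which is one reason the "find the exact load" problem matters less than it seems.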

How do you find the load of a computer, where the output of a circuit might drive paths of great or little resistance? Is there a way to make logic circuits with transistors acting as switches while avoiding having to work out the saturation point?

((2)) How are computers made so that they can sit idle and then respond to inputs? This might sound like a stupid question, but from a programming perspective the only way I could think to do it would be to create an infinite loop that constantly checks for inputs, or for a 1 in some input register.
The more I think about it, the more it seems possible that this is how computers do it now; I know they're able to run many programs at once, so it seems possible. However, how did early computers do it? If a computer had only the capacity to run one program at a time, then it seems a program accepting inputs would always have to come to a halt whenever an input is required.

Is this truly the only way? How do modern computers accept inputs, and how do computers that can only run one program at a time accept them?
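The busy-wait loop described above can be sketched like this; the input-register read is a hypothetical placeholder, simulated here with a queue of port values:

```python
# A polling ("busy-wait") input loop of the kind described above.
# read_input_register() stands in for reading a memory-mapped
# input port; here it is simulated with a queue of values.

import collections

pending = collections.deque([0, 0, 7])   # simulated input port reads

def read_input_register():
    return pending.popleft() if pending else 0

def poll_for_input():
    """Spin until the input register reads nonzero, then return it."""
    while True:
        value = read_input_register()
        if value != 0:
            return value   # an interrupt system avoids this spinning

print(poll_for_input())  # 7
```

The loop burns CPU time doing nothing useful until data arrives, which is exactly the inefficiency that hardware interrupts were invented to remove.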

((3)) I am also interested in memory circuitry. Looking at the types of RAM, DRAM seems to be pretty straightforward. As I understand it, SRAM is essentially memory made of logic gates where an outside signal initially acts as an input. The output of the logic circuit is then fed back in as an input, and the circuit holds "itself" high or low (this description might be totally wrong). The problem is I have only seen schematics that show SRAM using CMOS. Is there a way to build SRAM using BJTs?
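The feedback description above matches a cross-coupled latch. Here is a gate-level simulation of an SR latch built from two NOR gates; this is a generic latch model, not a specific SRAM cell design:

```python
# Cross-coupled NOR (SR) latch: each gate's output feeds the other
# gate's input, so the circuit "holds itself" high or low once the
# external Set/Reset inputs are released. This is the feedback idea
# behind an SRAM cell, sketched with gates rather than transistors.

def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(s, r, q, qn, steps=4):
    """Iterate the feedback loop until the outputs settle."""
    for _ in range(steps):
        q, qn = nor(r, qn), nor(s, q)
    return q, qn

q, qn = sr_latch(s=1, r=0, q=0, qn=1)   # pulse Set: latch goes high
print(q, qn)                             # 1 0
q, qn = sr_latch(s=0, r=0, q=q, qn=qn)  # inputs released: state held
print(q, qn)                             # still 1 0
```

The second call is the interesting one: with both inputs at 0, the feedback alone keeps the stored bit, which is why SRAM needs no refresh.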

((4)) I have also had a chance to look at how rewritable ROM (despite being "read only"), such as flash memory, works. My first question is, why is this called read-only? I get that it is nonvolatile so it's not RAM, but it surely isn't read-only memory either!

Are there any ways to make rewritable ROM with discrete components? Flash involves, to my understanding, trapping electrons through some kind of tunneling. As far as I know, this can only be done in silicon. If I were to make some sort of computer (or more accurately, a device that demonstrates the properties of a computer), would the nonvolatile memory be limited to non-electrical things (brain, punch card, magnetic memory)?

((5)) How are jumps done at the opcode level? Assembly has specific commands for different types of jumps. Let's say you are running an 8-bit computer that has 8 bits of memory at each address; can't you only address 256 locations (0 to 255)? Let's say the 8-bit processor has 128 unique operations, and an unconditional jump happens to be the 100th operation. In the memory you write your program to, the jump would be written as 1100100 (100 in decimal, the jump command). There would only be 1 bit left, hardly enough room to write the address you are actually trying to jump to.

My only conclusion is that you would tack a 0 onto the front of the jump command, making 01100100 and filling the whole byte of memory. Then, when the computer reads the byte, it recognizes the opcode for an unconditional jump and reads the next byte of memory as the target address, not as another instruction. Thus, if you were trying to write an unconditional jump to address zero, the program would look like:
........
........
01100100
00000000

Is this how computers perform operations that combine an operation code and a memory address written as one line in assembly code?
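The guessed encoding above can be checked with a toy fetch-decode loop. The opcode values here (0x64 = 100 for JMP, plus a HALT and NOP) are invented for this sketch, not taken from any real instruction set:

```python
# Toy fetch-decode loop showing how a jump opcode pulls its target
# address from the NEXT byte, exactly as guessed above. Opcode
# values are made up for illustration.

JMP, HALT, NOP = 0x64, 0x00, 0x01

def run(memory):
    pc, trace = 0, []             # program counter, executed addresses
    while True:
        trace.append(pc)
        op = memory[pc]
        if op == JMP:
            pc = memory[pc + 1]   # operand byte holds the target
        elif op == HALT:
            return trace
        else:
            pc += 1               # one-byte instructions

# NOP, then jump over the NOP at address 3 to the HALT at address 4
program = [NOP, JMP, 4, NOP, HALT]
print(run(program))  # [0, 1, 4]
```

Note that the byte at address 2 is never decoded as an opcode; the JMP handler consumed it as data, which is the key to how multi-byte instructions work.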

Thanks for your time and help!


## Discussions

The best thing to answer this lot is to go out and buy a book on the subject. There are certain books which are seminal electronic engineering books every electronic engineer should have: Scroggie's "Foundations of Radio and Electronics", Horowitz and Hill's "The Art of Electronics", and Clive Maxfield's "Bebop to the Boolean Boogie": http://www.amazon.com/Bebop-Boolean-Boogie-Third-Unconventional/dp/1856175073/ref=pd_bxgy_b_text_b

It's not really possible to answer your questions without spending a ludicrous amount of time bringing you up to speed on some basics, but get the book, inwardly digest, and ask again when you get stuck.

Actually, I tried the same thing a while back, and the best way to do it is as orksecurity said: do a proof of concept on all your gates, then scale up from there. And yes, you should run below the saturation point.

I'm sure at some point you'll want to interface with pre-made components (LCDs, VGA monitors, etc.), so you'll have to know what voltage ranges those work with to decide where you want your logic levels. Especially with digital components, you'll need to keep everything in conformity with their on/off ranges unless you don't mind adding MORE hardware to scale up/down at every interface point.

A very useful site is falstad.com/circuit. It is a sandbox-style electronics simulator that was very helpful to me.

It can be done, but I wouldn't recommend trying it entirely at the transistor level unless you have a lot of room, a lot of time, and a lot of patience. With a full-time job, my ALU took about 3 months; I never had it completely assembled at the transistor level at any one time (it was a modular design, interfaced with a microcontroller to fill in the blanks), but if I had, it would have been maybe 10' x 10' (yes, feet).

Best of luck to you! Always keep in mind that current divides among parallel paths in inverse proportion to their resistance, so "walk through" your design to see where to tweak resistor values. Also be sure to check the spec sheets on your BJTs to make sure they can handle the current demands.

Thanks for all of the advice. I have read more and seen how BJTs can be used in robust TTL open-collector logic circuits. These require various resistor values and such, so it looked somewhat hard. I have read the allaboutcircuits.com digital volume on the construction and function of logic gates: http://www.allaboutcircuits.com/vol_4/index.html.

I have also gained, through that chapter, a good introductory understanding of CMOS logic gates, which seem to be both simpler to understand and easier to construct. I have a few questions about the parts used in CMOS logic gates:

1. While the chapter showed resistors in the TTL open-collector logic gates, there was no mention of any in the CMOS gates (save maybe a few pull-up resistors on the inputs). Are any internal components other than transistors needed in CMOS circuitry?

2. The article mentions MOSFET transistors, and judging by the name C*MOS*, I am guessing that MOSFETs are required, as opposed to other varieties of FETs. Is this true?

3. Lastly, and most importantly, I am on a severe hobbyist budget, but I want to make a few gates of my own. While I am finding that BJTs can cost as little as 2 cents each for both NPN and PNP transistors, I am having a harder time with MOSFETs. First off, I make my own circuit boards and they are too crude for SOT packages, so I need through-hole packaging. Also, there seem to be far fewer p-channel than n-channel MOSFETs, and more importantly I get the impression that certain p-channels make better complements to certain n-channels. Is it true that any old p-channel won't really be compatible with any old n-channel?

My requirements, then, are through-hole packaging and a p-channel that is complementary to the n-channel. The variable factor is getting the lowest cost, since I am considering buying a few hundred of each type.

The cheapest n-channel through-hole part I can find is 6.5 cents in bulk (2N7000). P-channels seem to be as cheap as 25 cents in bulk, and I don't even know if that one is complementary. Can anyone recommend cheap, through-hole, n- and p-channel MOSFETs that make a good pair? Tall order, I know.

Thanks again for any suggestions and insight.

Also, there are plans, or at least circuits, available for the Apollo Guidance Computer, which would make an incredible project.

As others have said: if you want to build more than a few gates, this is a gawdawful huge project. I would suggest you take this hierarchically: build a few gates out of transistors to prove that you can, build a few logic circuits out of small-scale ICs to prove that you can, repeat with medium-scale, then build your computer out of LSI chips -- which will itself be a large project, even if you use bit-slice microprocessors to provide the core of the machine; fleshing out a design of that sort was my thesis project.

That's sufficient to prove that you *could* build the whole machine from the ground up if you had the time and cash and education.

In fact, MIT had an *excellent* class which follows exactly this path, starting from the transistor level and building layers of abstraction all the way up to computers, looking at how those abstractions simplify design and what their implications are for how typical computers behave. I would bet dollars to donuts that this is one of the classes they've put online in their OpenCourseWare system. If you're really interested in this, I HIGHLY recommend that you take the time to work through that class if you can access it; it is an excellent approach to exactly what you've asked here.

And, yes, the fact that it takes MIT a full term to properly cover the questions you're asking is, again, an indication of the size of your question.

BTW, ideally you do *not* want to run the transistors into saturation; that makes them slower to switch back again.

I suggest building one with gates, perhaps following a pre-existing design first. Once your feet are thoroughly wet by the experience, you can jump into the deep end of the Mariana Trench...

> ((1)) My first question is about how to build logic gates with discrete components.
That one's way over my head.

> ((2)) How are computers made so that they can sit idle and then respond to inputs. This might sound like a stupid question but from a programming perspective the only way I could think to do it would be to create an infinite loop that constantly checks for inputs or a 1 in some input register.
That's basically what happens at the lowest level, but it is usually based on interrupts (e.g., NMI and IRQ): the hardware checks the interrupt lines between instructions, so the program itself doesn't have to poll constantly.
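The interrupt idea can be sketched in miniature; everything here (the flag, the handler name, the main-program steps) is illustrative, not any particular CPU:

```python
# Between instructions, a CPU samples its interrupt line; if it is
# asserted, execution diverts to a handler routine, so the main
# program never has to poll for input itself.

irq_line = {"asserted": False}   # simulated hardware interrupt line
log = []

def main_step(i):
    log.append(f"main {i}")      # ordinary program work

def irq_handler():
    log.append("handler")        # service the device, then return
    irq_line["asserted"] = False

for i in range(4):               # the fetch-execute loop
    if irq_line["asserted"]:     # hardware check between instructions
        irq_handler()
    main_step(i)
    if i == 1:
        irq_line["asserted"] = True   # a device raises an interrupt

print(log)  # ['main 0', 'main 1', 'handler', 'main 2', 'main 3']
```

The main program contains no input-checking code at all; the handler runs only when the line is asserted, which is the whole point of interrupts over busy-waiting.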

> ((3)) & ((4))

> ((5)) How are jumps done in the opcode level?
Many architectures will let you use two or more bytes/words per instruction, which also makes memory paging and far jumps possible.

Yes, of course you COULD build a basic computer from discrete components, and people have.

The tone of your questions suggests you have a long way to go, though, before this could become a possibility at your present level of knowledge.

You can build a working CPU from relays if you so desire.