System Internal Controls Management Accountants
A management information system (MIS) is a system or process that provides information needed to manage
organizations effectively. Management information systems are regarded as a subset of the overall internal
controls procedures in a business, which cover the application of people, documents, technologies, and procedures
used by management accountants to solve business problems such as costing a product, service, or a business-wide
strategy. Management information systems are distinct from regular information systems in that they are used to
analyze other information systems applied in operational activities in the organization. Academically, the term is
commonly used to refer to the group of information management methods tied to the automation or support of human
decision making, e.g. decision support systems, expert systems, and executive information systems.
Why an objective: Every company or organization needs objectives to guide its work and to meet its needs. The objective of an
MIS (to provide useful information, data, and analysis) remains constant, but the features and uses are
customizable to suit the preferences and needs of every business, individual, or government. A Management
Information System is used in areas such as sales and finance to analyze raw data into meaningful information for planning,
scheduling, decision making, and forward planning. It is useful in every aspect of business.
The Transmission Control Protocol (TCP) is one of the core protocols of the Internet Protocol Suite. TCP is one of the two
original components of the suite, complementing the Internet Protocol (IP) and therefore the entire suite is commonly referred to
as TCP/IP. TCP provides the service of exchanging data reliably between two network hosts, whereas IP handles
addressing and routing messages across one or more networks. In particular, TCP provides reliable, ordered delivery of a stream of
bytes from a program on one computer to another program on another computer. TCP is the protocol that major Internet
applications rely on, such as the World Wide Web, e-mail, and file transfer. Applications that do not require a reliable data
stream service use a sister protocol, the User Datagram Protocol (UDP), which provides a datagram service that emphasizes
reduced latency over reliability.
Protocol operation
TCP protocol operations may be divided into three phases. Connections must be properly established in a multi-step handshake
process (connection establishment) before entering the data transfer phase. After data transmission is completed, the connection
termination phase closes the connection and releases all allocated resources.
A TCP connection is managed by an operating system through a programming interface that represents the local end-point for
communications, the Internet socket. During the lifetime of a TCP connection it undergoes a series of state changes:
1. LISTEN : In case of a server, waiting for a connection request from any remote client.
2. SYN-SENT : waiting for the remote peer to send back a TCP segment with the SYN and ACK flags set. (usually set by
TCP clients)
3. SYN-RECEIVED : waiting for a confirming connection acknowledgment after having both received and sent a connection
request. (usually set by TCP servers)
4. ESTABLISHED : the port is ready to receive/send data from/to the remote peer.
5. FIN-WAIT-1 : waiting for a connection termination request from the remote peer, or an acknowledgment of the
termination request previously sent.
6. FIN-WAIT-2 : waiting for a connection termination request from the remote peer.
7. CLOSE-WAIT : waiting for a connection termination request from the local application.
8. CLOSING : waiting for a connection termination request acknowledgment from the remote peer.
9. LAST-ACK : waiting for an acknowledgment of the connection termination request previously sent to the remote peer.
10. TIME-WAIT : represents waiting for enough time to pass to be sure the remote peer received the acknowledgment of its
connection termination request. According to RFC 793 a connection can stay in TIME-WAIT for a maximum of four
minutes.
11. CLOSED : no connection state at all.
Connection establishment
To establish a connection, TCP uses a three-way handshake. Before a client attempts to connect with a server, the server must first
bind to a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may initiate
an active open. To establish a connection, the three-way (or 3-step) handshake occurs:
1. The active open is performed by the client sending a SYN to the server. It sets the segment's sequence number to a
random value A.
2. In response, the server replies with a SYN-ACK. The acknowledgment number is set to one more than the received
sequence number (A + 1), and the sequence number that the server chooses for the packet is another random number,
B.
3. Finally, the client sends an ACK back to the server. The sequence number is set to the received acknowledgement value
i.e. A + 1, and the acknowledgement number is set to one more than the received sequence number i.e. B + 1.
At this point, both the client and server have received an acknowledgment of the connection.
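The three-step exchange above can be sketched as a small simulation. This is illustrative only (no real network traffic is involved), and the dictionary field names are made up for clarity; the example ISNs 1000 and 5000 stand in for the random values A and B:

```python
# Illustrative sketch of three-way-handshake sequence/ack arithmetic.
def three_way_handshake(client_isn, server_isn):
    # Step 1: client performs an active open, sending SYN with seq = A.
    syn = {"flags": "SYN", "seq": client_isn}
    # Step 2: server replies SYN-ACK: ack = A + 1, its own random seq = B.
    syn_ack = {"flags": "SYN-ACK", "seq": server_isn, "ack": syn["seq"] + 1}
    # Step 3: client sends ACK: seq = A + 1 (the received ack), ack = B + 1.
    ack = {"flags": "ACK", "seq": syn_ack["ack"], "ack": syn_ack["seq"] + 1}
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake(client_isn=1000, server_isn=5000)
print(syn_ack["ack"], ack["seq"], ack["ack"])  # 1001 1001 5001
```

Both endpoints end the exchange having acknowledged the other side's initial sequence number plus one.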
Resource usage
Most implementations allocate an entry in a table that maps a session to a running operating system process. Because TCP
packets do not include a session identifier, both endpoints identify the session using the client's address and port. Whenever a
packet is received, the TCP implementation must perform a lookup on this table to find the destination process.
The number of sessions on the server side is limited only by memory and can grow as new connections arrive, but the client must
allocate a random port before sending the first SYN to the server. This port remains allocated during the whole conversation, and
effectively limits the number of outgoing connections from each of the client's IP addresses. If an application fails to properly close
unrequired connections, a client can run out of resources and become unable to establish new TCP connections, even from other
applications.
Both endpoints must also allocate space for unacknowledged packets and received (but unread) data.
Data transfer
There are a few key features that set TCP apart from the User Datagram Protocol:
Ordered data transfer - the destination host rearranges segments according to sequence number
Flow control - limits the rate at which a sender transfers data to guarantee reliable delivery. The receiver continually hints the
sender on how much data can be received (controlled by the sliding window). When the receiving host's buffer fills, the next
acknowledgment contains a 0 in the window size, to stop transfer and allow the data in the buffer to be processed
Congestion control - limits the rate of data entering the network, keeping the flow below a rate that would trigger congestion collapse
Reliable transmission
TCP uses a sequence number to identify each byte of data. The sequence number identifies the order of the bytes sent from each
computer so that the data can be reconstructed in order, regardless of any fragmentation, disordering, or packet loss that may occur
during transmission. For every payload byte transmitted the sequence number must be incremented. In the first two steps of the 3-way handshake, both computers exchange an initial sequence number (ISN). This number can be arbitrary, and should in fact be
unpredictable, to defend against TCP sequence prediction attacks.
TCP primarily uses a cumulative acknowledgment scheme, where the receiver sends an acknowledgment signifying that the
receiver has received all data preceding the acknowledged sequence number. Essentially, the first byte in a segment's data field is
assigned a sequence number, which is inserted in the sequence number field, and the receiver sends an acknowledgment
specifying the sequence number of the next byte they expect to receive. For example, if computer A sends 4 bytes with a sequence
number of 100 (conceptually, the four bytes would have a sequence number of 100, 101, 102 and 103 assigned) then the receiver
would send back an acknowledgment of 104 since that is the next byte it expects to receive in the next packet.
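The acknowledgment arithmetic in this example is simple enough to state as one line of code. A minimal sketch (the function name is made up):

```python
# Cumulative acknowledgment: the receiver acknowledges the sequence number of
# the next byte it expects, i.e. the segment's sequence number plus its length.
def cumulative_ack(seq, payload):
    return seq + len(payload)

# Computer A sends 4 bytes with sequence number 100 (bytes 100..103):
print(cumulative_ack(100, b"data"))  # 104
```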
In addition to cumulative acknowledgments, TCP receivers can also send selective acknowledgments to provide further information .
If the sender infers that data has been lost in the network, it retransmits the data.
Error detection
Sequence numbers and acknowledgments cover discarding duplicate packets, retransmission of lost packets, and ordered data transfer. To assure correctness, a checksum field is included.
The TCP checksum is a weak check by modern standards. Data Link Layers with high bit error rates may require additional link
error correction/detection capabilities. The weak checksum is partially compensated for by the common use of a CRC or better
integrity check at layer 2, below both TCP and IP, such as is used in PPP or the Ethernet frame. However, this does not mean that
the 16-bit TCP checksum is redundant: remarkably, introduction of errors in packets between CRC-protected hops is common, but
the end-to-end 16-bit TCP checksum catches most of these simple errors. This is the end-to-end principle at work.
Flow control
TCP uses an end-to-end flow control protocol to avoid having the sender send data too fast for the TCP receiver to receive and
process it reliably. Having a mechanism for flow control is essential in an environment where machines of diverse network speeds
communicate. For example, if a PC sends data to a hand-held PDA that is slowly processing received data, the PDA must regulate the data flow so as not to be overwhelmed.
TCP uses a sliding window flow control protocol. In each TCP segment, the receiver specifies in the receive window field the
amount of additional received data (in bytes) that it is willing to buffer for the connection. The sending host can send only up to that
amount of data before it must wait for an acknowledgment and window update from the receiving host.
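The sender-side accounting implied here can be sketched in a few lines: the amount of data in flight (sent but unacknowledged) plus any new segment must fit inside the advertised receive window. The function name and parameters are illustrative, not from any real TCP stack:

```python
# Sliding-window check: a sender may have at most `rwnd` bytes outstanding.
def can_send(next_seq, last_ack, rwnd, nbytes):
    in_flight = next_seq - last_ack   # bytes sent but not yet acknowledged
    return in_flight + nbytes <= rwnd

# With a 4,000-byte receive window and 3,000 bytes already in flight:
print(can_send(next_seq=13000, last_ack=10000, rwnd=4000, nbytes=1000))  # True
print(can_send(next_seq=13000, last_ack=10000, rwnd=4000, nbytes=1001))  # False
```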
TCP sequence numbers and receive windows behave very much like a clock. The receive window shifts each time the receiver receives and acknowledges a new segment of data. Once it runs out of sequence numbers, the sequence number loops back to 0.
When a receiver advertises a window size of 0, the sender stops sending data and starts the persist timer. The persist timer is
used to protect TCP from a deadlock situation that could arise if the window size update from the receiver is lost and the sender has
no more data to send while the receiver is waiting for the new window size update. When the persist timer expires, the TCP sender
sends a small packet so that the receiver sends an acknowledgement with the new window size.
If a receiver is processing incoming data in small increments, it may repeatedly advertise a small receive window. This is referred to
as the silly window syndrome, since it is inefficient to send only a few bytes of data in a TCP segment, given the relatively large
overhead of the TCP header. TCP senders and receivers typically employ flow control logic to specifically avoid repeatedly sending
small segments. The sender-side silly window syndrome avoidance logic is referred to as Nagle's algorithm.
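The sender-side rule of Nagle's algorithm can be sketched as a small predicate: a full-sized segment may always be sent, but a small segment waits until all previously sent data has been acknowledged. This is a simplified sketch (real stacks also consider the PSH flag and other conditions), and the function name is made up:

```python
# Simplified Nagle decision: send small segments only when nothing is in flight.
def nagle_should_send(buffered_bytes, mss, unacked_bytes):
    if buffered_bytes >= mss:
        return True            # a full-sized segment can always go out
    return unacked_bytes == 0  # small segments wait for outstanding ACKs

print(nagle_should_send(buffered_bytes=1460, mss=1460, unacked_bytes=500))  # True
print(nagle_should_send(buffered_bytes=10, mss=1460, unacked_bytes=500))    # False
print(nagle_should_send(buffered_bytes=10, mss=1460, unacked_bytes=0))      # True
```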
Congestion control
The final main aspect of TCP is congestion control. TCP uses a number of mechanisms to achieve high performance and avoid
'congestion collapse', where network performance can fall by several orders of magnitude. These mechanisms control the rate of
data entering the network, keeping the data flow below a rate that would trigger collapse.
Acknowledgments for data sent, or lack of acknowledgments, are used by senders to infer network conditions between the TCP
sender and receiver. Coupled with timers, TCP senders and receivers can alter the behavior of the flow of data. This is more commonly referred to as congestion control and/or network congestion avoidance.
Modern implementations of TCP contain four intertwined algorithms: Slow-start, congestion avoidance, fast retransmit, and fast
recovery (RFC 5681).
In addition, senders employ a retransmission timeout (RTO) that is based on the estimated round-trip time (or RTT) between the
sender and receiver, as well as the variance in this round trip time. The behavior of this timer is specified in RFC 2988. There are
subtleties in the estimation of RTT. For example, senders must be careful when calculating RTT samples for retransmitted packets;
typically they use Karn's Algorithm or TCP timestamps (see RFC 1323). These individual RTT samples are then averaged over time
to create a Smoothed Round Trip Time (SRTT) using Jacobson's algorithm. This SRTT value is what is finally used as the round-trip
time estimate.
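The SRTT/RTO update described above can be sketched from the formulas in RFC 2988: the first RTT sample initializes SRTT and the variance estimate, and later samples are folded in with the standard gains alpha = 1/8 and beta = 1/4. This is a sketch of the computation only (the function name is made up, and the clock-granularity term G is assumed negligible):

```python
# Smoothed RTT and retransmission timeout per RFC 2988 (alpha=1/8, beta=1/4).
def update_rto(rtt_sample, srtt=None, rttvar=None, alpha=0.125, beta=0.25):
    if srtt is None:                      # first RTT measurement
        srtt, rttvar = rtt_sample, rtt_sample / 2
    else:
        rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt_sample)
        srtt = (1 - alpha) * srtt + alpha * rtt_sample
    rto = max(1.0, srtt + 4 * rttvar)     # RFC 2988 recommends a 1 s minimum
    return srtt, rttvar, rto

srtt, rttvar, rto = update_rto(0.5)            # first sample: SRTT=0.5, RTO=1.5
srtt, rttvar, rto = update_rto(0.7, srtt, rttvar)
print(round(srtt, 4), round(rto, 4))
```

Note that retransmitted segments are excluded from sampling here, per Karn's algorithm as mentioned above.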
Enhancing TCP to reliably handle loss, minimize errors, manage congestion, and perform well in very high-speed environments is an
ongoing area of research and standards development. As a result, there are a number of TCP congestion avoidance
algorithm variations.
Maximum segment size
The Maximum segment size (MSS) is the largest amount of data, specified in bytes, that TCP is willing to send in a single segment.
For best performance, the MSS should be set small enough to avoid IP fragmentation, which can lead to excessive retransmissions
if there is packet loss. To try to accomplish this, typically the MSS is negotiated using the MSS option when the TCP connection is
established, in which case it is determined by the maximum transmission unit (MTU) size of the data link layer of the networks to
which the sender and receiver are directly attached. Furthermore, TCP senders can use Path MTU discovery to infer the minimum
MTU along the network path between the sender and receiver, and use this to dynamically adjust the MSS to avoid IP fragmentation.
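The MSS derivation from the MTU can be made concrete. Assuming IPv4 and TCP headers without options (20 bytes each; real header sizes vary with options), a sketch:

```python
# MSS = link MTU minus fixed IPv4 and TCP header sizes (no options assumed).
IP_HEADER, TCP_HEADER = 20, 20

def mss_for_mtu(mtu):
    return mtu - IP_HEADER - TCP_HEADER

print(mss_for_mtu(1500))  # 1460 - the usual value on Ethernet
# With Path MTU discovery, the sender uses the smallest MTU along the path:
print(mss_for_mtu(min(1500, 1492)))  # 1452, e.g. when a PPPoE hop is present
```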
Selective acknowledgments
Relying purely on the cumulative acknowledgment scheme employed by the original TCP protocol can lead to inefficiencies when
packets are lost. For example, suppose 10,000 bytes are sent in 10 different TCP packets, and the first packet is lost during
transmission. In a pure cumulative acknowledgment protocol, the receiver cannot say that it received bytes 1,000 to 9,999
successfully, but failed to receive the first packet, containing bytes 0 to 999. Thus the sender may then have to resend all 10,000
bytes.
To solve this problem TCP employs the selective acknowledgment (SACK) option, defined in RFC 2018, which allows the receiver
to acknowledge discontinuous blocks of packets that were received correctly, in addition to the sequence number of the last
contiguous byte received successively, as in the basic TCP acknowledgment. The acknowledgement can specify a number
of SACK blocks, where each SACK block is conveyed by the starting and ending sequence numbers of a contiguous range that the
receiver correctly received. In the example above, the receiver would send a SACK with sequence numbers 1,000 and 9,999. The sender would thus retransmit only the first packet, bytes 0 to 999.
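The retransmission decision in this example can be sketched as a set computation. For simplicity the SACK blocks are treated as inclusive byte ranges, matching the numbers in the text (RFC 2018 actually encodes the right edge as the sequence number just past the block), and the function name is made up:

```python
# Given a cumulative ACK and SACK blocks (inclusive ranges), find missing bytes.
def bytes_to_retransmit(cum_ack, sack_blocks, highest_sent):
    received = set()
    for start, end in sack_blocks:
        received.update(range(start, end + 1))
    return [b for b in range(cum_ack, highest_sent + 1) if b not in received]

# First 1,000-byte packet lost out of ten: cumulative ACK is 0,
# SACK reports bytes 1,000-9,999 as correctly received.
missing = bytes_to_retransmit(0, [(1000, 9999)], 9999)
print(min(missing), max(missing), len(missing))  # 0 999 1000
```

Only the first packet's bytes need to be resent, rather than all 10,000 bytes.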
An extension to the SACK option is the "duplicate-SACK" option, defined in RFC 2883. Out-of-order packet delivery can often
falsely indicate to the TCP sender that a packet was lost; in turn, the TCP sender retransmits the suspected-to-be-lost packet and slows
down the data delivery to prevent network congestion. The TCP sender undoes this slow-down, that is, recovers the
original pace of data transmission, upon receiving a D-SACK that indicates the retransmitted packet was a duplicate.
The SACK option is not mandatory and it is used only if both parties support it. This is negotiated when connection is established.
SACK uses the optional part of the TCP segment structure. The use of SACK is widespread - all popular TCP stacks support it.
Window scaling
For more efficient use of high-bandwidth networks, a larger TCP window size may be used. The TCP window size field controls the
flow of data and its value is limited to between 2 and 65,535 bytes.
Since the size field cannot be expanded, a scaling factor is used. The TCP window scale option, as defined in RFC 1323, is an
option used to increase the maximum window size from 65,535 bytes to 1 gigabyte. Scaling up to larger window sizes is a part of
what is necessary for TCP tuning.
The window scale option is used only during the TCP 3-way handshake. The window scale value represents the number of bits to
left-shift the 16-bit window size field. The window scale value can be set from 0 (no shift) to 14 for each direction independently.
Both sides must send the option in their SYN segments to enable window scaling in either direction.
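The left-shift described above determines the effective window. A sketch (the function name is made up; the bounds checks mirror the limits stated in the text):

```python
# Effective receive window: the 16-bit window field shifted by the scale value.
def effective_window(window_field, scale):
    assert 0 <= window_field <= 65535 and 0 <= scale <= 14
    return window_field << scale

print(effective_window(65535, 0))   # 65535 bytes - no scaling
print(effective_window(65535, 14))  # 1073725440 bytes - close to 1 GiB
```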
Some routers and packet firewalls rewrite the window scaling factor during a transmission. This causes sending and receiving sides
to assume different TCP window sizes. The result is non-stable traffic that may be very slow. The problem is visible on some
sites behind a defective router.
TCP timestamps
TCP timestamps, defined in RFC 1323, help TCP compute the round-trip time between the sender and receiver. Timestamp options
include a 4-byte timestamp value, where the sender inserts its current value of its timestamp clock, and a 4-byte echo reply
timestamp value, where the receiver generally inserts the most recent timestamp value that it has received. The sender uses the
echo reply timestamp in an acknowledgment to compute the total elapsed time since the acknowledged segment was sent.
TCP timestamps are also used to help in the case where TCP sequence numbers encounter their 2^32 bound and "wrap around" the
sequence number space. This scheme is known as Protect Against Wrapped Sequence numbers, or PAWS. Furthermore, the Eifel
detection algorithm, defined in RFC 3522, which detects unnecessary loss recovery, requires TCP timestamps.
Out of band data
One is able to interrupt or abort the queued stream instead of waiting for the stream to finish. This is done by specifying the data
as urgent. This tells the receiving program to process it immediately, along with the rest of the urgent data. When finished, TCP
informs the application and resumes back to the stream queue. An example is when TCP is used for a remote login session, the
user can send a keyboard sequence that interrupts or aborts the program at the other end. These signals are most often needed
when a program on the remote machine fails to operate correctly. The signals must be sent without waiting for the program to finish its current transfer.
TCP OOB data was not designed for the modern Internet. The urgent pointer only alters the processing on the remote host and
doesn't expedite any processing on the network itself. When it gets to the remote host there are two slightly different interpretations
of the protocol, which means only single bytes of OOB data are reliable. This is assuming it is reliable at all, as it is one of the least commonly used protocol elements and tends to be poorly implemented.
Forcing data delivery
Normally, TCP waits for the buffer to exceed the maximum segment size before sending any data. This creates serious delays when
the two sides of the connection are exchanging short messages and need to receive the response before continuing. For example,
the login sequence at the beginning of a telnet session begins with the short message "Login", and the session cannot make any
progress until these five characters have been transmitted and the response has been received. This process can be seriously
delayed by TCP's normal behavior when the message is provided to TCP in several send calls.
However, an application can force delivery of segments to the output stream using a push operation provided by TCP to the
application layer.[2] This operation also causes TCP to set the PSH flag or control bit to ensure that data is delivered immediately to the receiving application.
In the most extreme cases, for example when a user expects each keystroke to be echoed by the receiving application,
the push operation can be used each time a keystroke occurs. More generally, application programs use this function to force output
to be sent after writing a character or line of characters. By forcing the data to be sent immediately, delays and wait time are
reduced.
Connection termination
The connection termination phase uses, at most, a four-way handshake, with each side of the connection terminating independently.
When an endpoint wishes to stop its half of the connection, it transmits a FIN packet, which the other end acknowledges with an
ACK. Therefore, a typical tear-down requires a pair of FIN and ACK segments from each TCP endpoint.
A connection can be "half-open", in which case one side has terminated its end, but the other has not. The side that has terminated
can no longer send any data into the connection, but the other side can. The terminating side should continue reading the data until the
other side terminates as well.
It is also possible to terminate the connection by a 3-way handshake, when host A sends a FIN and host B replies with a FIN & ACK
(merely combining two steps into one) and host A replies with an ACK. This is perhaps the most common method.
It is possible for both hosts to send FINs simultaneously then both just have to ACK. This could possibly be considered a 2-way
handshake since the FIN/ACK sequence is done in parallel for both directions.
Some host TCP stacks may implement a "half-duplex" close sequence, as Linux or HP-UX do. If such a host actively closes a
connection but still has not read all the incoming data the stack already received from the link, this host sends a RST instead of a
FIN. This allows a TCP application to be sure the remote application has read all the data the former sent, waiting for the FIN from the
remote side when it actively closes the connection. However, the remote TCP stack cannot distinguish between a Connection
Aborting RST and this Data Loss RST. Both cause the remote stack to throw away all the data it received, including data that the application still needed.
Some application protocols may violate the OSI model layers, using the TCP open/close handshaking for the application protocol's own open/close handshaking; these may encounter the RST problem on active close.
For a usual program flow like the above, a TCP/IP stack does not guarantee that all the data arrives at the
other application unless the programmer is sure that the remote side will not send anything.
Vulnerabilities
Denial of service
By using a spoofed IP address and repeatedly sending purposely assembled SYN packets, attackers can cause the server to
consume large amounts of resources keeping track of the bogus connections. This is known as a SYN flood attack. Proposed
solutions to this problem include SYN cookies and cryptographic puzzles. Sockstress is a similar attack, against which no defense
is yet known. An advanced DoS attack involving the exploitation of the TCP persist timer has also been analyzed.
Connection hijacking
An attacker who is able to eavesdrop on a TCP session and redirect packets can hijack a TCP connection. To do so, the attacker
learns the sequence number from the ongoing communication and forges a false segment that looks like the next segment in the
stream. Such a simple hijack can result in one packet being erroneously accepted at one end. When the receiving host
acknowledges the extra segment to the other side of the connection, synchronization is lost. Hijacking might be combined with ARP
or routing attacks that allow taking control of the packet flow, so as to get permanent control of the hijacked TCP connection.
Impersonating a different IP address was possible prior to RFC 1948, when the initial sequence number was easily guessable. That
allowed an attacker to blindly send a sequence of packets that the receiver would believe to come from a different IP address,
without the need to deploy ARP or routing attacks: it is enough to ensure that the legitimate host of the impersonated IP address is
down, or bring it to that condition using denial of service attacks. This is why the initial sequence number is chosen at random.
In information theory, entropy is a measure of the uncertainty associated with a random variable. The term by itself in this context
usually refers to the Shannon entropy, which quantifies, in the sense of an expected value, the information contained in a
message, usually in units such as bits. Equivalently, the Shannon entropy is a measure of the average information content one is
missing when one does not know the value of the random variable. The concept was introduced by Claude E. Shannon in his 1948
paper "A Mathematical Theory of Communication".
Example: Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails.
The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have
equal probability 1/2). This is the situation of maximum uncertainty, as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin then delivers a full 1 bit of information.
However, if we know the coin is not fair, but comes up heads or tails with probabilities p and q, then there is less uncertainty. Every
time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on
average each toss of the coin delivers less than a full 1 bit of information.
The extreme case is that of a double-headed coin which never comes up tails. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information.
Entropy
In physics, the word entropy has important physical implications as the amount of "disorder" of a system. In mathematics, a more abstract definition is used. The (Shannon) entropy of a variable X is defined as

H(X) = -sum_i p_i * log2(p_i)

bits, where p_i is the probability that X is in state i, and p * log2(p) is defined as 0 if p = 0. The joint entropy of variables X_1, ..., X_n is then defined by

H(X_1, ..., X_n) = -sum p(x_1, ..., x_n) * log2 p(x_1, ..., x_n).
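The coin example above can be checked directly against the entropy formula. A minimal sketch (the function name is made up):

```python
# Shannon entropy in bits: H = -sum p * log2(p), with 0 * log2(0) taken as 0.
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # 1.0 - a fair coin: maximum uncertainty
print(entropy([0.9, 0.1]))  # ~0.469 - a biased coin: less than 1 bit per toss
print(entropy([1.0]))       # 0.0 - a double-headed coin: no uncertainty
```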
For most email users, using an email Spam filter to get rid of Spam is the only viable alternative to manually sifting
through large numbers of junk email every day.
User defined filters are included in most email clients today. With these filters you can forward email to different
mailboxes depending on headers or contents. For example, you would put email from each of your friends into a
mailbox named after them. You can also use these same filters to forward email to the trash if the origin or contents
are suspicious. To do this you need to carefully look at any Spam emails you receive. Try to notice common
characteristics, recurring patterns in senders’ email addresses, dubious claims in the subject line and so on. You will
soon find that Spam filtering using a small number of rules can eliminate a large number of Spam emails.
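The user-defined filtering described above amounts to matching a small set of hand-written rules against message headers. A sketch, in which the rule patterns, addresses, and mailbox names are all invented for illustration:

```python
# Route a raw message to a mailbox using simple header-matching rules.
import re

RULES = [
    ("friends", re.compile(r"^From:.*(alice|bob)@example\.com", re.M)),
    ("trash",   re.compile(r"^Subject:.*(free money|act now)", re.M | re.I)),
]

def route(message):
    for mailbox, pattern in RULES:
        if pattern.search(message):
            return mailbox
    return "inbox"  # no rule matched: deliver normally

print(route("From: alice@example.com\nSubject: lunch?"))   # friends
print(route("From: x@spam.example\nSubject: Act NOW!!!"))  # trash
```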
Header filters are more sophisticated. They look at the email headers to see if they are forged. Email headers
contain information in addition to the recipient, sender and subject fields displayed on your screen. They also
contain information regarding the servers that were used in delivering your email (the relay chain). Many spammers
do not want to be traced. They put false information in the email headers to prevent people from contacting them
directly. Some anti-spam programs can detect forged headers, which are a sure indication that the email is Spam.
Not all Spam has forged headers though, so this filter by itself is not sufficient.
Language filters simply filter out any email that is not in your native tongue. They only filter out foreign-language
Spam, which is not a major problem today unless the language in question is English. In future,
languages other than English are expected to make up an increasingly large percentage of Internet
communications. If you do not expect to get emails in another language, this may be a quick and easy way to
eliminate some portion of your Spam.
Content filters scan the text of an email and use fuzzy logic to give a weighted opinion as to whether the email is
Spam. They can be highly effective, but can also occasionally filter out newsletters and other bulk email that may
appear to be Spam. This can usually be overridden by explicitly authorizing email from domains you subscribe to.
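A weighted content filter of the kind described can be sketched as a simple scoring scheme; real filters use far richer fuzzy logic, and the phrase weights and threshold below are entirely made up:

```python
# Score a message by summing weights of suspicious phrases; flag if over threshold.
WEIGHTS = {"free": 1.0, "winner": 2.0, "viagra": 3.0, "unsubscribe": 0.5}
THRESHOLD = 3.0

def is_spam(text):
    score = sum(w for word, w in WEIGHTS.items() if word in text.lower())
    return score >= THRESHOLD

print(is_spam("You are a WINNER of free cash!"))    # True (score 3.0)
print(is_spam("Monthly newsletter - unsubscribe"))  # False (score 0.5)
```

The false-positive problem mentioned above shows up here too: a legitimate newsletter containing enough of the weighted phrases would cross the threshold, which is why explicit authorization of subscribed domains is the usual override.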
Permission filters block all email that does not come from an authorized source. Typically the first time you send an email
to a person using a permission filter you will receive an auto-response inviting you to visit a web page and enter some
information. Your email then becomes authorized and any future emails you send will be accepted. This is not suitable for
all users, but very effective for those that choose to use it, as long as the auto-response email is not blocked by the Spam
filter of the initial sender!
Set 3: Store messages to known legitimate addresses. I have several such rules, but they all
just match a literal To: field.
Set 4: Look for messages that have a legit address in the header, but that weren't caught by
the previous To: filters. I find that when I am only in the Bcc: field, it's almost always an
unsolicited mailing to a list of alphabetically sequential addresses (mertz1@..., mertz37@...,
etc).
Set 5: Anything left at this point is probably spam (it probably has forged headers to avoid
identification of the sender).
2. Whitelist / verification filters
A fairly aggressive technique for spam filtering is what I would call the "whitelist plus automated
verification" approach. There are several tools that implement a whitelist with verification: TMDA is a
popular multi-platform open source tool; ChoiceMail is a commercial tool for Windows; most others
seem more preliminary.
A whitelist filter connects to an MTA and passes mail only from explicitly approved senders on to
the inbox. Other messages generate a special challenge response to the sender. The whitelist filter's
response contains some kind of unique code that identifies the original message, such as a hash or
sequential ID. This challenge message contains instructions for the sender to reply in order to be
added to the whitelist (the response message must contain the code generated by the whitelist
filter). Almost all spam messages contain forged return address information, so the challenge usually
does not even arrive anywhere; but even those spammers who provide usable return addresses are
unlikely to respond to a challenge. When a legitimate sender answers a challenge, her/his address is
added to the whitelist so that any future messages from the same address are passed through
automatically.
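The whitelist-plus-verification flow can be sketched as a small state machine: unknown senders get a challenge carrying a code derived from the held message (a hash, as mentioned above), and a correct reply both delivers the held message and whitelists the sender. All names and addresses here are invented for illustration:

```python
# Sketch of a whitelist/verification filter: challenge unknown senders.
import hashlib

whitelist = {"friend@example.com"}
held = {}  # challenge code -> (sender, message)

def receive(sender, message):
    if sender in whitelist:
        return ("deliver", None)
    # Hold the message and issue a challenge containing a code derived from it.
    code = hashlib.sha256(message.encode()).hexdigest()[:8]
    held[code] = (sender, message)
    return ("challenge", code)

def challenge_reply(code):
    if code in held:
        sender, message = held.pop(code)
        whitelist.add(sender)  # future mail from this sender passes automatically
        return ("deliver", message)
    return ("reject", None)

action, code = receive("new@example.org", "hello!")
print(action)                                  # challenge
print(challenge_reply(code)[0])                # deliver
print(receive("new@example.org", "again")[0])  # deliver - now whitelisted
```

Spam with a forged return address never answers the challenge, so the held message is simply never delivered.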
Although I have not used any of these tools more than experimentally myself, I would expect
whitelist/verification filters to be very nearly 100% effective in blocking spam messages. It is
conceivable that spammers will start adding challenge responses to their systems, but this could be
countered by making challenges slightly more sophisticated (for example, by requiring small human
modification to a code). Spammers who respond, moreover, make themselves more easily traceable
for people seeking legal remedies against them.
The problem with whitelist/verification filters is the extra burden they place on legitimate senders.
Inasmuch as some correspondents may fail to respond to challenges -- for any reason -- this makes
for a type of false positive. In the best case, a slight extra effort is required for legitimate senders.
But senders who have unreliable ISPs, picky firewalls, multiple e-mail addresses, non-native
understanding of English (or whatever language the challenge is written in), or who simply overlook
or cannot be bothered with challenges, may not have their legitimate messages delivered. Moreover,
sometimes legitimate "correspondents" are not people at all, but automated response systems with
no capability of challenge response. Whitelist/verification filters are likely to require extra efforts to
deal with mailing-list signups, online purchases, Web site registrations, and other "robot
correspondences".