Computer Networking: A Top-Down Approach, 7th Edition
Solutions to Review Questions and Problems
Version Date: December 2016
This document contains the solutions to review questions and problems for the 7th edition of Computer Networking: A Top-Down Approach by Jim Kurose and Keith Ross. These solutions are being made available to instructors ONLY. Please do NOT copy or distribute this document to others (even other instructors). Please do not post any solutions on a publicly available Web site. We'll be happy to provide an up-to-date copy of this solution manual ourselves to anyone who asks.
Acknowledgments: Over the years, several students and colleagues have helped us prepare this solutions manual. Special thanks goes to Honggang Zhang, Rakesh Kumar, Prithula Dhungel, and Vijay Annapureddy. Also thanks to all the readers who have made suggestions and corrected errors.

All material © copyright 1996-2016 by J.F. Kurose and K.W. Ross. All rights reserved.
Chapter 1 Review Questions
1. There is no difference. Throughout this text, the words "host" and "end system" are used interchangeably. End systems include PCs, workstations, Web servers, mail servers, PDAs, Internet-connected game consoles, etc.
2. From Wikipedia: Diplomatic protocol is commonly described as a set of international courtesy rules. These well-established and time-honored rules have made it easier for nations and people to live and work together. Part of protocol has always been the acknowledgment of the hierarchical standing of all present. Protocol rules are based on the principles of civility.
3. Standards are important for protocols so that people can create networking systems and products that interoperate.
4. 1. Dial-up modem over telephone line: home; 2. DSL over telephone line: home or small office; 3. Cable to HFC: home; 4. 100 Mbps switched Ethernet: enterprise; 5. Wifi (802.11): home and enterprise; 6. 3G and 4G: wide-area wireless.
5. HFC bandwidth is shared among the users. On the downstream channel, all packets emanate from a single source, namely, the head end. Thus, there are no collisions in the downstream channel.
6. In most American cities, the current possibilities include: dial-up; DSL; cable modem; fiber-to-the-home.
7. Ethernet LANs have transmission rates of 10 Mbps, 100 Mbps, 1 Gbps and 10 Gbps.
8. Today, Ethernet most commonly runs over twisted-pair copper wire. It can also run over fiber optic links.
9. Dial-up modems: up to 56 Kbps, bandwidth is dedicated. ADSL: up to 24 Mbps downstream and 2.5 Mbps upstream, bandwidth is dedicated. HFC: downstream rates up to 42.8 Mbps and upstream rates of up to 30.7 Mbps, bandwidth is shared. FTTH: 2-10 Mbps upload, 10-20 Mbps download, bandwidth is not shared.
10. There are two popular wireless Internet access technologies today:
a) Wifi (802.11): In a wireless LAN, wireless users transmit/receive packets to/from a base station (i.e., wireless access point) within a radius of a few tens of meters. The base station is typically connected to the wired Internet and thus serves to connect wireless users to the wired network.
b) 3G and 4G wide-area wireless access networks: In these systems, packets are transmitted over the same wireless infrastructure used for cellular telephony, with the base station thus being managed by a telecommunications provider. This provides wireless access to users within a radius of tens of kilometers of the base station.
11. At time t0 the sending host begins to transmit. At time t1 = L/R1, the sending host completes transmission and the entire packet is received at the router (no propagation delay). Because the router has the entire packet at time t1, it can begin to transmit the packet to the receiving host at time t1. At time t2 = t1 + L/R2, the router completes transmission and the entire packet is received at the receiving host (again, no propagation delay). Thus, the end-to-end delay is L/R1 + L/R2.
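A minimal sketch of this store-and-forward calculation; the values of L, R1 and R2 below are assumed for illustration only:

# Store-and-forward delay over two links with no propagation delay: d = L/R1 + L/R2.
L = 1500 * 8          # packet size in bits (assumed example value)
R1, R2 = 1e6, 2e6     # link rates in bits/sec (assumed example values)
t1 = L / R1           # packet fully received at the router
t2 = t1 + L / R2      # packet fully received at the destination
print("end-to-end delay = %.3f ms" % (t2 * 1e3))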
12. A circuit-switched network can guarantee a certain amount of end-to-end bandwidth for the duration of a call. Most packet-switched networks today (including the Internet) cannot make any end-to-end guarantees for bandwidth. FDM requires sophisticated analog hardware to shift signals into appropriate frequency bands.
13. a) 2 users can be supported because each user requires half of the link bandwidth.
b) Since each user requires 1 Mbps when transmitting, if two or fewer users transmit simultaneously, a maximum of 2 Mbps will be required. Since the available bandwidth of the shared link is 2 Mbps, there will be no queuing delay before the link. Whereas, if three users transmit simultaneously, the bandwidth required will be 3 Mbps, which is more than the available bandwidth of the shared link. In this case, there will be queuing delay before the link.
c) Probability that a given user is transmitting = 0.2.
d) Probability that all three users are transmitting simultaneously = C(3,3) p^3 (1-p)^0 = (0.2)^3 = 0.008. Since the queue grows when all the users are transmitting, the fraction of time during which the queue grows (which is equal to the probability that all three users are transmitting simultaneously) is 0.008.
14. If the two ISPs do not peer with each other, then when they send traffic to each other they have to send the traffic through a provider ISP (intermediary), to which they have to pay for carrying the traffic. By peering with each other directly, the two ISPs can reduce their payments to their provider ISPs. An Internet Exchange Point (IXP) (typically in a standalone building with its own switches) is a meeting point where multiple ISPs can connect and/or peer together. An IXP earns its money by charging each of the ISPs that connect to the IXP a relatively small fee, which may depend on the amount of traffic sent to or received from the IXP.
15. Google's private network connects together all its data centers, big and small. Traffic between the Google data centers passes over its private network rather than over the public Internet. Many of these data centers are located in, or close to, lower-tier ISPs. Therefore, when Google delivers content to a user, it often can bypass higher-tier ISPs. What motivates content providers to create these networks? First, the content provider has more control over the user experience, since it has to use fewer intermediary ISPs. Second, it can save money by sending less traffic into provider networks. Third, if ISPs decide to charge more money to highly profitable content providers (in countries where net neutrality doesn't apply), the content providers can avoid these extra payments.
16. The delay components are processing delays, transmission delays, propagation delays, and queuing delays. All of these delays are fixed, except for the queuing delays, which are variable.
17. a) 1000 km, 1 Mbps, 100 bytes
b) 100 km, 1 Mbps, 100 bytes
18. 10msec; d/s; no; no
19. a) 500 kbps
b) 64 seconds
c) 100 kbps; 320 seconds
20. End system A breaks the large file into chunks. It adds a header to each chunk, thereby generating multiple packets from the file. The header in each packet includes the IP address of the destination (end system B). The packet switch uses the destination IP address in the packet to determine the outgoing link. Asking which road to take is analogous to a packet asking which outgoing link it should be forwarded on, given the packet's destination address.
21. The maximum emission rate is 500 packets/sec and the maximum transmission rate is 350 packets/sec. The corresponding traffic intensity is 500/350 = 1.43 > 1. Loss will eventually occur for each experiment; but the time when loss first occurs will be different from one experiment to the next due to the randomness in the emission process.
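A small sketch of the traffic-intensity check, using the rates quoted in the answer:

# Traffic intensity = (arrival rate) / (service rate).
emission_rate = 500        # packets/sec offered to the queue
transmission_rate = 350    # packets/sec the link can serve
intensity = emission_rate / transmission_rate
print("traffic intensity = %.2f" % intensity)                    # 1.43
print("queue grows without bound, so loss is inevitable:", intensity > 1)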
22. Five generic tasks are error control, flow control, segmentation and reassembly, multiplexing, and connection setup. Yes, these tasks can be duplicated at different layers. For example, error control is often provided at more than one layer.
23. The five layers in the Internet protocol stack are, from top to bottom, the application layer, the transport layer, the network layer, the link layer, and the physical layer. The principal responsibilities are outlined in Section 1.5.1.
24. Application-layer message: data which an application wants to send, passed onto the transport layer. Transport-layer segment: generated by the transport layer, encapsulates the application-layer message with a transport-layer header. Network-layer datagram: encapsulates the transport-layer segment with a network-layer header. Link-layer frame: encapsulates the network-layer datagram with a link-layer header.
25. Routers process network, link and physical layers (layers 1 through 3). (This is a little bit of a white lie, as modern routers sometimes act as firewalls or caching components, and process the transport layer as well.) Link-layer switches process link and physical layers (layers 1 through 2). Hosts process all five layers.
26. a) Virus: Requires some form of human interaction to spread. Classic example: e-mail viruses.
b) Worms: No user interaction needed. A worm in an infected host scans IP addresses and port numbers, looking for vulnerable processes to infect.
27. Creation of a botnet requires an attacker to find a vulnerability in some application or system (e.g., exploiting the buffer overflow vulnerability that might exist in an application). After finding the vulnerability, the attacker needs to scan for hosts that are vulnerable. The target is basically to compromise a series of systems by exploiting that particular vulnerability. Any system that is part of the botnet can automatically scan its environment and propagate by exploiting the vulnerability. An important property of such botnets is that the originator of the botnet can remotely control and issue commands to all the nodes in the botnet. Hence, it becomes possible for the attacker to issue a command to all the nodes that targets a single node (for example, all nodes in the botnet might be commanded by the attacker to send a TCP SYN message to the target, which might result in a TCP SYN flood attack at the target).
28. Trudy can pretend to be Bob to Alice (and vice-versa) and partially or completely modify the message(s) being sent from Bob to Alice. For example, she can easily change the phrase "Alice, I owe you $1,000" to "Alice, I owe you $10,000". Furthermore, Trudy can even drop the packets that are being sent by Bob to Alice (and vice-versa), even if the packets from Bob to Alice are encrypted.
Chapter 1 Problems
Problem 1
There is no single right answer to this question. Many protocols would do the trick. Here's a simple answer below:
Messages from ATM machine to Server
Msg name     purpose
--------     -------
HELO         Let server know that there is a card in the ATM machine;
             the ATM card transmits the user ID to the server
PASSWD       User enters PIN, which is sent to server
BALANCE      User requests balance
WITHDRAWL    User asks to withdraw money
BYE          user all done

Messages from Server to ATM machine (display)
Msg name     purpose
--------     -------
PASSWD       Ask user for PIN (password)
OK           last requested operation (PASSWD, WITHDRAWL) OK
ERR          last requested operation (PASSWD, WITHDRAWL) in ERROR
AMOUNT       sent in response to BALANCE request
BYE          user done, display welcome screen at ATM
Correct operation:

client                          server
HELO (userid)   -------------->  (check if valid userid)
                <--------------  PASSWD
PASSWD          -------------->  (check password)
                <--------------  OK (password is OK)
BALANCE         -------------->
                <--------------  AMOUNT
WITHDRAWL       -------------->  (check if enough $ to cover withdrawl)
                <--------------  OK
ATM dispenses $
BYE             -------------->
                <--------------  BYE

In situation when there's not enough money:

HELO (userid)   -------------->  (check if valid userid)
                <--------------  PASSWD
PASSWD          -------------->  (check password)
                <--------------  OK (password is OK)
BALANCE         -------------->
                <--------------  AMOUNT
WITHDRAWL       -------------->  (check if enough $ to cover withdrawl)
                <--------------  ERR (not enough funds)
error msg displayed, no $ given out
BYE             -------------->
                <--------------  BYE
Problem 2
At time N*(L/R) the first packet has reached the destination, the second packet is stored in the last router, the third packet is stored in the next-to-last router, etc. At time N*(L/R) + L/R, the second packet has reached the destination, the third packet is stored in the last router, etc. Continuing with this logic, we see that at time N*(L/R) + (P-1)*(L/R) = (N+P-1)*(L/R) all packets have reached the destination.
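A quick sketch that checks the (N+P-1)*(L/R) formula against a direct packet-by-packet simulation; N, P, L and R below are assumed example values:

# End-to-end delay for P packets over N store-and-forward links of rate R.
N, P = 3, 5                 # links and packets (assumed example values)
L, R = 1500 * 8, 1e6        # bits per packet, bits/sec (assumed example values)
formula = (N + P - 1) * (L / R)
done = [0.0] * N            # time at which each link finishes its current transmission
for p in range(P):
    t = 0.0                 # packet p is available at the source at time 0
    for link in range(N):
        t = max(t, done[link]) + L / R   # wait for the link, then transmit
        done[link] = t
print(formula, t)           # both equal (N+P-1)*L/R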
Problem 3
a) A circuit-switched network would be well suited to the application, because the application involves long sessions with predictable smooth bandwidth requirements. Since the transmission rate is known and not bursty, bandwidth can be reserved for each application session without significant waste. In addition, the overhead costs of setting up and tearing down connections are amortized over the lengthy duration of a typical application session.
b) In the worst case, all the applications simultaneously transmit over one or more network links. However, since each link has sufficient bandwidth to handle the sum of all of the applications' data rates, no congestion (very little queuing) will occur. Given such generous link capacities, the network does not need congestion control mechanisms.
Problem 4
a) Between the switch in the upper left and the switch in the upper right we can have 4 connections. Similarly we can have four connections between each of the 3 other pairs of adjacent switches. Thus, this network can support up to 16 connections.
b) We can have 4 connections passing through the switch in the upper-right-hand corner and another 4 connections passing through the switch in the lower-left-hand corner, giving a total of 8 connections.
c) Yes. For the connections between A and C, we route two connections through B and two connections through D. For the connections between B and D, we route two connections through A and two connections through C. In this manner, there are at most 4 connections passing through any link.
Problem 5
Tollbooths are 75 km apart, and the cars propagate at 100 km/hr. A tollbooth services a car at a rate of one car every 12 seconds.
a) There are ten cars. It takes 120 seconds, or 2 minutes, for the first tollbooth to service the 10 cars. Each of these cars has a propagation delay of 45 minutes (to travel 75 km) before arriving at the second tollbooth. Thus, all the cars are lined up before the second tollbooth after 47 minutes. The whole process repeats itself for traveling between the second and third tollbooths. It also takes 2 minutes for the third tollbooth to service the 10 cars. Thus the total delay is 96 minutes.
b) Delay between tollbooths is 8*12 seconds plus 45 minutes, i.e., 46 minutes and 36 seconds. The total delay is twice this amount plus 8*12 seconds, i.e., 94 minutes and 48 seconds.
Problem 6
a) d_prop = m/s seconds.
b) d_trans = L/R seconds.
c) d_end-to-end = (m/s + L/R) seconds.
d) The bit is just leaving Host A.
e) The first bit is in the link and has not reached Host B.
f) The first bit has reached Host B.
g) Want m = (L/R)*s = (120 / (56*10^3)) * (2.5*10^8) = 536 km.
Problem 7
Consider the first bit in a packet.
Before this bit can be transmitted, all of the
bits in the
packet must be generated.
This requires
56
?
8
sec=7msec.
3
64
?
10
The time required to transmit the
packet is
56
?
8
sec=
p>
224
?
sec.
2
?
10
6
Propagation delay = 10 msec.
The delay until decoding is
7msec
+
224
?
sec + 10msec
= 17.224msec
A similar
analysis shows that all bits experience a delay of
17.224 msec.
Problem 8
a) 20 users can be supported.
b) p = 0.1.
c) P(n users transmitting) = C(120, n) p^n (1-p)^(120-n).
d) P("21 or more users") = 1 - sum_{n=0}^{20} C(120, n) p^n (1-p)^(120-n).
We use the central limit theorem to approximate this probability. Let X_j be independent random variables such that P(X_j = 1) = p. Then
  P("21 or more users") = 1 - P( sum_{j=1}^{120} X_j <= 21 ),
and
  P( sum_{j=1}^{120} X_j <= 21 ) = P( (sum_{j=1}^{120} X_j - 12) / sqrt(120*0.1*0.9) <= (21 - 12) / sqrt(120*0.1*0.9) )
    ≈ P( Z <= 9/3.286 ) = P( Z <= 2.74 ) = 0.997,
when Z is a standard normal random variable. Thus P("21 or more users") ≈ 0.003.
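As a cross-check, a short script that computes the exact binomial tail probability, which can be compared with the CLT estimate of about 0.003 above:

from math import comb

n, p = 120, 0.1
# Exact probability that 21 or more of the 120 users transmit simultaneously.
exact = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(21))
print("exact tail probability = %.4f" % exact)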
Problem 9
a) 10,000
b) sum_{n=N+1}^{M} C(M, n) p^n (1-p)^(M-n)
Problem 10
The first
end
system requires
L/R
1
to transmit
the packet onto the first link; the packet
propagates over the first link in
d
1
/s
1
; the packet switch adds a processing delay
of
d
proc
;
after
receiving
the
entire
packet,
the
packet
switch
connecting
the
first
and
the
second
link requires
L/R
2
to transmit
the packet onto the second link; the packet
propagates over
the second link in
d
2
/s
2
. Similarly, we can find the delay caused by the second switch and
the third link:
L/R
3
,
d
proc
, and
d
3
/s
3
.
Adding these five delays gives
d
end-
end
= L/R
1
+
L/R
2
+
L/R
3
+
d
1
/s
1
+
d
2
/s
2
+
d
3
/s
3
+
d
proc
+
d
proc
To answer the second question, we
simply plug the values into the equation to get 6
+ 6 +
6 + 20+16 + 4 + 3 + 3 = 64 msec.
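A small sketch that evaluates the formula with the delay values quoted in the answer (all in msec):

# d_end-end = three transmission delays + three propagation delays + two processing delays.
trans = [6, 6, 6]        # L/R1, L/R2, L/R3
prop  = [20, 16, 4]      # d1/s1, d2/s2, d3/s3
proc  = [3, 3]           # d_proc at each of the two switches
print(sum(trans) + sum(prop) + sum(proc), "msec")   # 64 msec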
Problem 11
Because bits are immediately transmitted, the packet switch does not introduce any delay; in particular, it does not introduce a transmission delay. Thus,
  d_end-end = L/R + d1/s1 + d2/s2 + d3/s3.
For the values in Problem 10, we get 6 + 20 + 16 + 4 = 46 msec.
Problem 12
The
arriving packet must first wait for the link to
transmit 4.5 *1,500 bytes = 6,750 bytes
or 54,000 bits. Since these bits are
transmitted at 2 Mbps, the queuing delay is 27
msec.
Generally, the queuing delay is
(
nL
+
(
L
-
x
))/
R
.
Problem 13
a) The queuing delay is 0 for the first transmitted packet, L/R for the second transmitted packet, and generally, (n-1)L/R for the nth transmitted packet. Thus, the average delay for the N packets is:
  (L/R + 2L/R + ... + (N-1)L/R)/N
  = L/(RN) * (1 + 2 + ... + (N-1))
  = L/(RN) * N(N-1)/2
  = (N-1)L/(2R).
Note that here we used the well-known fact: 1 + 2 + ... + N = N(N+1)/2.
b) It takes LN/R seconds to transmit the N packets. Thus, the buffer is empty when each batch of N packets arrives. Thus, the average delay of a packet across all batches is the average delay within one batch, i.e., (N-1)L/(2R).
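A brief numerical check of the average queuing delay (N-1)L/(2R), with N, L and R chosen only for illustration:

# Average queuing delay when N packets arrive simultaneously at an empty queue.
N = 12                       # packets per batch (assumed example value)
L, R = 1500 * 8, 2e6         # bits per packet, bits/sec (assumed example values)
direct  = sum(n * L / R for n in range(N)) / N   # mean of 0, L/R, ..., (N-1)L/R
formula = (N - 1) * L / (2 * R)
print(direct, formula)       # identical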
Problem 14
a)
The
transmission delay is
L
/
R
.
The total delay is
IL
L
L
/
R
?
< br>?
R
(
1
?
I
)
R
1
?
I
b)<
/p>
Let
x
?
L
/
R
.
x
Total delay =
<
/p>
1
?
ax
For
x=0,
the
total
delay
=0;
as
we
increase
x,
total
delay
increases,
approaching
infinity as x approaches 1/a.
Problem 15
Total delay
?
L
/
R
L
/
R
1
/
?
1
?
?
?
.
1
?
I
1
?
aL
/
R
1
?
p>
a
/
?
?
?
a
Problem 16
The total number of packets
in the system includes those in the buffer and the
packet that
is being transmitted. So,
N=10+1.
Because
N
?
a
?
d<
/p>
, so (10+1)=a*(queuing delay +
transmission delay). That is,
11=a*(0.01+1/100)=a*(0.01+0.01). Thus,
a=550 packets/sec.
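A one-liner applying Little's formula N = a*d with the numbers from this answer:

# Little's formula: N = a * d  =>  a = N / d.
N = 10 + 1                    # packets in the buffer plus the one being transmitted
d = 0.01 + 0.01               # queuing delay + transmission delay, seconds
print("arrival rate a =", N / d, "packets/sec")   # 550.0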
Problem 17
a) There are Q nodes (the source host and the Q-1 routers). Let d_proc^q denote the processing delay at the qth node. Let R^q be the transmission rate of the qth link, so that d_trans^q = L/R^q, and let d_prop^q be the propagation delay across the qth link. Then
  d_end-to-end = sum_{q=1}^{Q} [ d_proc^q + d_trans^q + d_prop^q ].
b) Let d_queue^q denote the average queuing delay at node q. Then
  d_end-to-end = sum_{q=1}^{Q} [ d_proc^q + d_trans^q + d_prop^q + d_queue^q ].
Problem 18
On Linux you can use the command traceroute, and in the Windows command prompt you can use tracert. In either case, you will get three delay measurements. For those three measurements you can calculate the mean and standard deviation. Repeat the experiment at different times of the day and comment on any changes.
Here is an example solution:
Traceroutes between San Diego Super Computer Center and
a) The average (mean) of the round-trip delays at each of the three hours is 71.18 ms, 71.38 ms and 71.55 ms, respectively. The standard deviations are 0.075 ms, 0.21 ms, 0.05 ms, respectively.
b) In this example, the traceroutes have 12 routers in the path at each of the three hours. No, the paths didn't change during any of the hours.
c) Traceroute packets passed through four ISP networks from source to destination. Yes, in this experiment the largest delays occurred at peering interfaces between adjacent ISPs.
Traceroutes from (France) to (USA):
d) The average round-trip delays at each of the three hours are 87.09 ms, 86.35 ms and 86.48 ms, respectively. The standard deviations are 0.53 ms, 0.18 ms, 0.23 ms, respectively. In this example, there are 11 routers in the path at each of the three hours. No, the paths didn't change during any of the hours. Traceroute packets passed through three ISP networks from source to destination. Yes, in this experiment the largest delays occurred at peering interfaces between adjacent ISPs.
Problem 19
An example solution:
Traceroutes from two different cities in France to New York City in the United States:
a) In these traceroutes from two different cities in France to the same destination host in the United States, seven links are in common, including the transatlantic link.
b) In this example of traceroutes from one city in France and from another city in Germany to the same host in the United States, three links are in common, including the transatlantic link.
Traceroutes to two different cities in China from the same host in the United States:
c) Five links are common in the two traceroutes. The two traceroutes diverge before reaching China.
Problem 20
Throughput = min{Rs, Rc, R/M}
Problem 21
If
only use one path, the max throughput is given by:
1
1
2
2
p>
M
M
m
ax
{m
in{
R
1
p>
,
R
2
,
?
,
R
1
in{
R
1
2
,
R
2
,
< br>?
,
R
N
},
?
,
m
in{
R
1
M
,
R
2
,
?<
/p>
,
R
N
}}
p>
.
N
},
m
<
/p>
k
k
,
?
,
R
N
}
.
If use all paths, the max
throughput is given by
?
min{
R
1
k
,
p>
R
2
k
?
1
M
Problem 22
Probability of successfully receiving a packet is ps = (1-p)^N. The number of transmissions needed to be performed until the packet is successfully received by the client is a geometric random variable with success probability ps. Thus, the average number of transmissions needed is given by 1/ps. Then, the average number of re-transmissions needed is given by 1/ps - 1.
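A quick simulation that sanity-checks the expected number of retransmissions 1/ps - 1; p and N below are assumed example values:

import random

p, N = 0.1, 10                   # per-link loss probability and number of links (assumed)
ps = (1 - p) ** N                # probability a packet crosses all N links intact
expected_retx = 1 / ps - 1
# Monte Carlo estimate: count failed attempts before the first success.
trials, retx = 100_000, 0
for _ in range(trials):
    while random.random() >= ps: # an attempt fails with probability 1 - ps
        retx += 1
print(expected_retx, retx / trials)   # the two values should be close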
Problem 23
Let’s
call the first packet
A and call the second packet B.
a)
If the
bottleneck link is the first link, then packet B
is queued at the first link waiting
for
the transmission of packet A. So the packet inter-
arrival time at the destination is
simply
L/R
s
.
b)
If
the second link is the bottleneck link and both
packets are sent back to back, it must
be true that the second packet arrives
at the input queue of the second link before the
second link finishes the transmission
of the first packet. That is,
L/R
s
+
L/R
s
+
d
prop
<
L/R
s
+
d
prop
+
L/R
c
The left hand side of the above
inequality represents the time needed by the
second
packet to
arrive at
the input queue of the second link (the
second link has not started
transmitting the second
packet yet). The right hand side represents the
time needed by
the first packet to
finish its transmission onto the second link.
If we send the second
packet
T
seconds later, we
will ensure that there is no queuing
delay for the second packet at the
second link if we have:
L/R
s
+
L/R
s
+
d
prop
+ T >=
L/R
s
+
d
prop
+ L/R
c
Thus, the minimum value of
T is
L/R
c
?
L/R
s
.
Problem 24
40 terabytes = 40 *
10
12
* 8 bits. So, if using
the dedicated link, it will take 40 *
10
12
* 8 /
(100
*10
6
) =3200000 seconds = 37
days. But with FedEx overnight delivery, you can
guarantee the data arrives in one day,
and it should cost less than $$100.
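The arithmetic behind the 37-day figure, for reference:

# Time to push 40 terabytes through a dedicated 100 Mbps link.
bits = 40e12 * 8            # 40 terabytes expressed in bits
rate = 100e6                # 100 Mbps
seconds = bits / rate
print("%.0f seconds = %.1f days" % (seconds, seconds / 86400))   # ~3.2e6 sec, ~37 days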
Problem 25
a) 160,000 bits
b) 160,000 bits
c) The bandwidth-delay product of a link is the maximum number of bits that can be in the link.
d) The width of a bit = length of link / bandwidth-delay product, so 1 bit is 125 meters long, which is longer than a football field.
e) s/R
Problem 26
s
/
R
=20000k
m, then
R
=
s
/20000km= 2.5*10
8
/(2*10
7
)= 12.5 bps
Problem 27
a) 80,000,000 bits
b) 800,000 bits. This is because the maximum number of bits that will be in the link at any given time = min(bandwidth-delay product, packet size) = 800,000 bits.
c) 0.25 meters
Problem 28
a) ttrans + tprop = 400 msec + 80 msec = 480 msec.
b) 20 * (ttrans + 2*tprop) = 20 * (20 msec + 80 msec) = 2 sec.
c) Breaking up a file takes longer to transmit because each data packet and its corresponding acknowledgement packet add their own propagation delays.
Problem 29
Recall geostationary satellite is
36,000 kilometers away from earth surface.
a)
150 msec
b)
1,500,000 bits
c)
600,000,000
bits
Problem 30
Let’s suppose the passenger
and his/her bags correspond to the data unit
arriving to the
top of the protocol
stack. When the passenger checks in, his/her bags
are checked, and a
tag
is
attached
to
the
bags
and
ticket.
This
is
additional
information
added
in
the
Baggage layer if Figure 1.20 that
allows the Baggage layer to implement the service
or
separating
the
passengers
and
baggage
on
the
sending
side,
and
then
reuniting
them
(hopefully!) on the
destination side. When a passenger then passes
through security and
additional stamp
is often added to his/her ticket, indicating that
the passenger has passed
through a
security check. This information is used to
ensure (e.g., by later checks for the
security information) secure transfer
of people.
Problem 31
a) Time to send the message from the source host to the first packet switch = (8*10^6) / (2*10^6) sec = 4 sec. With store-and-forward switching, the total time to move the message from source host to destination host = 4 sec * 3 hops = 12 sec.
b) Time to send the 1st packet from the source host to the first packet switch = (1*10^4) / (2*10^6) sec = 5 msec. Time at which the 2nd packet is received at the first switch = time at which the 1st packet is received at the second switch = 2 * 5 msec = 10 msec.
c) Time at which the 1st packet is received at the destination host = 5 msec * 3 hops = 15 msec. After this, every 5 msec one packet will be received; thus the time at which the last (800th) packet is received = 15 msec + 799 * 5 msec = 4.01 sec. It can be seen that the delay when using message segmentation is significantly less (almost one third).
d) i. Without message segmentation, if bit errors are not tolerated, if there is a single bit error, the whole message has to be retransmitted (rather than a single packet).
ii. Without message segmentation, huge packets (containing HD videos, for example) are sent into the network. Routers have to accommodate these huge packets. Smaller packets have to queue behind enormous packets and suffer unfair delays.
e) i. Packets have to be put in sequence at the destination.
ii. Message segmentation results in many smaller packets. Since header size is usually the same for all packets regardless of their size, with message segmentation the total amount of header bytes is more.
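A small sketch reproducing the two totals compared in parts (a) and (c); the link rate, hop count, and sizes are the ones used in the answer:

# Compare store-and-forward delay without and with message segmentation.
R = 2e6                       # link rate, bits/sec
hops = 3                      # number of store-and-forward links
msg_bits = 8e6                # the whole message
pkt_bits = 1e4                # one of the 800 packets
no_seg   = (msg_bits / R) * hops                          # 12 sec
with_seg = (pkt_bits / R) * hops + 799 * (pkt_bits / R)   # 15 msec + 799*5 msec = 4.01 sec
print(no_seg, with_seg)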
Problem 32
Yes,
the delays in the applet correspond to the delays
in the Problem propagation
delays
affect
the
overall
end-to-end
delays
both
for
packet
switching
and
message
switching equally.
Problem 33
There are
F
/
S
packets. Each packet is S=80 bits. Time at which
the last packet is received
S
?
80
F
at
the
first
router
is
?
sec.
At
this
time,
the
first
F/S-2
packets
are
at
the
R
S
destination,
and
the
F/S-1
packet
is
at
the
second
router.
The
last
packet
must
then
be
transmitted
by
the
first
router
and
the
second
router,
with
each
transmission
taking
< br>S
?
80
S
?
80
F
sec. Thus
delay in sending the whole file is
delay
?
?
(
?
p>
2
)
R
R
S
To
calculate the value of S which leads to the
minimum delay,
d
delay
?
0
?
S
?
40
F
dS
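A numerical sanity check of the optimal packet size S = sqrt(40F), minimizing (S+80)/R * (F/S + 2) by brute force; F and R below are assumed example values:

from math import sqrt

F, R = 1e6, 1e6                             # file size in bits, link rate in bits/sec (assumed)
delay = lambda S: (S + 80) / R * (F / S + 2)
best_S = min(range(1, int(F)), key=delay)   # brute-force search over integer S
print(best_S, sqrt(40 * F))                 # both close to sqrt(40F), about 6325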
Problem 34
The circuit-switched telephone networks and the Internet are connected together at gateways. When a Skype user wants to call an ordinary telephone, a circuit is established between a gateway and the telephone user over the circuit-switched network. The Skype user's voice is sent in packets over the Internet to the gateway. At the gateway, the voice signal is reconstructed and then sent over the circuit. In the other direction, the voice signal is sent over the circuit-switched network to the gateway. The gateway packetizes the voice signal and sends the voice packets to the Skype user.
Chapter 2 Review Questions
1. The Web: HTTP; file transfer: FTP; remote login: Telnet; e-mail: SMTP; BitTorrent file sharing: BitTorrent protocol.
2. Network architecture refers to the organization of the communication process into layers (e.g., the five-layer Internet architecture). Application architecture, on the other hand, is designed by an application developer and dictates the broad structure of the application (e.g., client-server or P2P).
3. The process which initiates the communication is the client; the process that waits to be contacted is the server.
4. No. In a P2P file-sharing application, the peer that is receiving a file is typically the client and the peer that is sending the file is typically the server.
5. The IP address of the destination host and the port number of the socket in the destination process.
6. You would use UDP. With UDP, the transaction can be completed in one roundtrip time (RTT): the client sends the transaction request into a UDP socket, and the server sends the reply back to the client's UDP socket. With TCP, a minimum of two RTTs are needed: one to set up the TCP connection, and another for the client to send the request and for the server to send back the reply.
7. One such example is remote word processing, for example, with Google Docs. However, because Google Docs runs over the Internet (using TCP), timing guarantees are not provided.
8. a) Reliable data transfer: TCP provides a reliable byte-stream between client and server, but UDP does not.
b) A guarantee that a certain value for throughput will be maintained: Neither.
c) A guarantee that data will be delivered within a specified amount of time: Neither.
d) Confidentiality (via encryption): Neither.
9. SSL operates at the application layer. The SSL socket takes unencrypted data from the application layer, encrypts it and then passes it to the TCP socket. If the application developer wants TCP to be enhanced with SSL, she has to include the SSL code in the application.
10. A protocol uses handshaking if the two communicating entities first exchange control packets before sending data to each other. SMTP uses handshaking at the application layer whereas HTTP does not.
11. The applications associated with those protocols require that all application data be received in the correct order and without gaps. TCP provides this service whereas UDP does not.
12. When the user first visits the site, the server creates a unique identification number, creates an entry in its back-end database, and returns this identification number as a cookie number. This cookie number is stored on the user's host and is managed by the browser. During each subsequent visit (and purchase), the browser sends the cookie number back to the site. Thus the site knows when this user (more precisely, this browser) is visiting the site.
13. Web caching can bring the desired content "closer" to the user, possibly to the same LAN to which the user's host is connected. Web caching can reduce the delay for all objects, even objects that are not cached, since caching reduces the traffic on links.
14. Telnet is not available in Windows 7 by default. To make it available, go to Control Panel, Programs and Features, Turn Windows Features On or Off, and check Telnet Client. To start Telnet, in the Windows command prompt, issue the following command
  > telnet webserver 80
where webserver is some web server. After issuing the command, you have established a TCP connection between your client telnet program and the web server. Then type in an HTTP GET message. An example is given below:
Since the page in this web server was not modified since Fri, 18 May 2007 09:23:34 GMT, and the above commands were issued on Sat, 19 May 2007, the server returned "304 Not Modified". Note that the first four lines are the GET message and header lines input by the user, and the next four lines (starting from HTTP/1.1 304 Not Modified) are the response from the web server.
15. A list of several popular messaging apps: WhatsApp, Facebook Messenger, WeChat, and Snapchat. These apps use different protocols than SMS.
16. The message is first sent from Alice's host to her mail server over HTTP. Alice's mail server then sends the message to Bob's mail server over SMTP. Bob then transfers the message from his mail server to his host over POP3.
17.
Received: from 65.54.246.203 (EHLO ) (65.54.246.203) by with SMTP; Sat, 19 May 2007 16:53:51 -0700
Received: from ([65.55.135.106]) by with Microsoft SMTPSVC(6.0.3790.2668); Sat, 19 May 2007 16:52:42 -0700
Received: from mail pickup service by with Microsoft SMTPSVC; Sat, 19 May 2007 16:52:41 -0700
Message-ID:
Received: from 65.55.135.123 by with HTTP; Sat, 19 May 2007 23:52:36 GMT
From:
To: prithula@
Bcc:
Subject: Test mail
Date: Sat, 19 May 2007 23:52:36 +0000
Mime-Version: 1.0
Content-Type: Text/html; format=flowed
Return-Path: prithuladhungel@

Figure: A sample mail message header

Received: This header field indicates the sequence in which the SMTP servers send and receive the mail message, including the respective timestamps. In this example there are 4 "Received:" header lines. This means the mail message passed through 5 different SMTP servers before being delivered to the receiver's mail box. The last (fourth) "Received:" header indicates the mail message flow from the SMTP server of the sender to the second SMTP server in the chain of servers. The sender's SMTP server is at address 65.55.135.123 and the second SMTP server in the chain is . The third "Received:" header indicates the mail message flow from the second SMTP server in the chain to the third server, and so on. Finally, the first "Received:" header indicates the flow of the mail message from the fourth SMTP server to the last SMTP server (i.e., the receiver's mail server) in the chain.
Message-id: The message has been given this number BAY130-F26D9E35BF59E0D18A819AFB9310@ (by bay0-omc3-). Message-id is a unique string assigned by the mail system when the message is first created.
From: This indicates the email address of the sender of the mail. In the given example, the sender is "prithuladhungel@".
To: This field indicates the email address of the receiver of the mail. In the example, the receiver is "prithula@".
Subject: This gives the subject of the mail (if any is specified by the sender). In the example, the subject specified by the sender is "Test mail".
Date: The date and time when the mail was sent by the sender. In the example, the sender sent the mail on 19th May 2007, at time 23:52:36 GMT.
Mime-version: MIME version used for the mail. In the example, it is 1.0.
Content-type: The type of content in the body of the mail message. In the example, it is "text/html".
Return-Path: This specifies the email address to which the mail will be sent if the receiver of this mail wants to reply to the sender. This is also used by the sender's mail server for bouncing back undeliverable mail messages or mailer-daemon error messages. In the example, the return path is "prithuladhungel@".
18. With download and delete, after a user retrieves its messages from a POP server, the messages are deleted. This poses a problem for the nomadic user, who may want to access the messages from many different machines (office PC, home PC, etc.). In the download and keep configuration, messages are not deleted after the user retrieves the messages. This can also be inconvenient, as each time the user retrieves the stored messages from a new machine, all of the non-deleted messages will be transferred to the new machine (including very old messages).
19. Yes, an organization's mail server and Web server can have the same alias for a host name. The MX record is used to map the mail server's host name to its IP address.
20. You should be able to see the sender's IP address for a user with an .edu email address. But you will not be able to see the sender's IP address if the user uses a Gmail account.
21. It is not necessary that Bob will also provide chunks to Alice. Alice has to be in the top 4 neighbors of Bob for Bob to send out chunks to her; this might not occur even if Alice provides chunks to Bob throughout a 30-second interval.
22. Recall that in BitTorrent, a peer picks a random peer and optimistically unchokes the peer for a short period of time. Therefore, Alice will eventually be optimistically unchoked by one of her neighbors, during which time she will receive chunks from that neighbor.
23. The overlay network in a P2P file sharing system consists of the nodes participating in the file sharing system and the logical links between the nodes. There is a logical link (an "edge" in graph theory terms) from node A to node B if there is a semi-permanent TCP connection between A and B. An overlay network does not include routers.
24. One server placement philosophy is called Enter Deep, which enters deep into the access networks of Internet Service Providers by deploying server clusters in access ISPs all over the world. The goal is to reduce delays and increase throughput between end users and the CDN servers. Another philosophy is Bring Home, which brings the ISPs home by building large CDN server clusters at a smaller number of sites and typically placing these server clusters in IXPs (Internet Exchange Points). This Bring Home design typically results in lower maintenance and management cost, compared with the enter-deep design philosophy.
25. Other than network-related factors, there are some important factors to consider, such as load-balancing (clients should not be directed to overloaded clusters), diurnal effects, variations across DNS servers within a network, limited availability of rarely accessed video, and the need to alleviate hot-spots that may arise due to popular video content. Reference paper: Torres, Ruben, et al., "Dissecting Video Server Selection Strategies in the YouTube CDN," IEEE ICDCS, 2011.
Another factor to consider is ISP delivery cost: the clusters may be chosen so that specific ISPs are used to carry CDN-to-client traffic, taking into account the different cost structures in the contractual relationships between ISPs and cluster operators.
26. With the UDP server, there is no welcoming socket, and all data from different clients enters the server through this one socket. With the TCP server, there is a welcoming socket, and each time a client initiates a connection to the server, a new socket is created. Thus, to support n simultaneous connections, the server would need n+1 sockets.
27. For the TCP application, as soon as the client is executed, it attempts to initiate a TCP connection with the server. If the TCP server is not running, then the client will fail to make a connection. For the UDP application, the client does not initiate connections (or attempt to communicate with the UDP server) immediately upon execution.
Chapter 2 Problems
Problem 1
a) F
b) T
c) F
d) F
e) F
Problem 2
SMS (Short Message Service) is a technology that allows the sending and receiving of text messages between mobile phones over cellular networks. One SMS message can contain data of 140 bytes and it supports languages internationally. The maximum size of a message can be 160 7-bit characters, 140 8-bit characters, or 70 16-bit characters. SMS is realized through the Mobile Application Part (MAP) of the SS#7 protocol, and the Short Message protocol is defined by 3GPP TS 23.040 and 3GPP TS 23.041. In addition, MMS (Multimedia Messaging Service) extends the capability of original text messages, and supports sending photos, longer text messages, and other content.
iMessage is an instant messenger service developed by Apple. iMessage supports texts, photos, audios or videos that we send to iOS devices and Macs over the cellular data network or WiFi. Apple's iMessage is based on a proprietary, binary protocol, APNs (Apple Push Notification Service).
WhatsApp Messenger is an instant messenger service that supports many mobile platforms such as iOS, Android, Windows Phone, and BlackBerry. WhatsApp users can send each other unlimited images, texts, audios, or videos over the cellular data network or WiFi. WhatsApp uses the XMPP protocol (Extensible Messaging and Presence Protocol).
iMessage and WhatsApp are different than SMS because they use a data plan to send messages and they work on TCP/IP networks, whereas SMS uses the text messaging plan we purchase from our wireless carrier. Moreover, iMessage and WhatsApp support sending photos, videos, files, etc., while the original SMS can only send text messages. Finally, iMessage and WhatsApp can work via WiFi, but SMS cannot.
Problem 3
Application layer protocols: DNS and HTTP
Transport layer protocols: UDP for DNS; TCP for HTTP
Problem 4
a) The document request was /cs453/. The Host: field indicates the server's name and /cs453/ indicates the file name.
b) The browser is running HTTP version 1.1, as indicated just before the first pair of <cr><lf>.
c) The browser is requesting a persistent connection, as indicated by the Connection: keep-alive header.
d) This is a trick question. This information is not contained in an HTTP message anywhere. So there is no way to tell this from looking at the exchange of HTTP messages alone. One would need information from the IP datagrams (that carried the TCP segment that carried the HTTP GET request) to answer this question.
e) Mozilla/5.0. The browser type information is needed by the server to send different versions of the same object to different types of browsers.
Problem 5
a) The status code of 200 and the phrase OK indicate that the server was able to locate the document successfully. The reply was provided on Tuesday, 07 Mar 2008 12:39:45 Greenwich Mean Time.
b) The document was last modified on Saturday 10 Dec 2005 18:27:46 GMT.
c) There are 3874 bytes in the document being returned.
d) The first five bytes of the returned document are: . The server agreed to a persistent connection, as indicated by the Connection: Keep-Alive field.
Problem 6
a) Persistent connections are discussed in section 8 of RFC 2616 (the real goal of this question was to get you to retrieve and read an RFC). Sections 8.1.2 and 8.1.2.1 of the RFC indicate that either the client or the server can indicate to the other that it is going to close the persistent connection. It does so by including the connection-token "close" in the Connection header field of the HTTP request/reply.
b) HTTP does not provide any encryption services.
c) (From RFC 2616) "Clients that use persistent connections should limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy."
d) Yes. (From RFC 2616) "A client might have started to send a new request at the same time that the server has decided to close the connection. From the server's point of view, the connection is being closed while it was idle, but from the client's point of view, a request is in progress."
Problem 7
The total amount of time to get the IP address is
  RTT1 + RTT2 + ... + RTTn.
Once the IP address is known, RTT0 elapses to set up the TCP connection and another RTT0 elapses to request and receive the small object. The total response time is
  2*RTT0 + RTT1 + RTT2 + ... + RTTn.
Problem 8
a) RTT1 + ... + RTTn + 2*RTT0 + 8*2*RTT0 = 18*RTT0 + RTT1 + ... + RTTn.
b) RTT1 + ... + RTTn + 2*RTT0 + 2*2*RTT0 = 6*RTT0 + RTT1 + ... + RTTn.
c) Persistent connection with pipelining (the default mode of HTTP):
  RTT1 + ... + RTTn + 2*RTT0 + RTT0 = 3*RTT0 + RTT1 + ... + RTTn.
Persistent connection without pipelining, without parallel connections:
  RTT1 + ... + RTTn + 2*RTT0 + 8*RTT0 = 10*RTT0 + RTT1 + ... + RTTn.
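A tiny helper that tallies these totals in units of RTT0, assuming (as in part b of the answer) that the browser opens enough parallel non-persistent connections to fetch the 8 referenced objects in two rounds:

# Total response time in units of RTT0, ignoring the DNS lookups RTT1..RTTn
# that are common to every case (1 base HTML file plus 8 referenced objects).
base = 2                      # TCP setup + request/response for the HTML file
a = base + 8 * 2              # non-persistent, serial: 18 RTT0
b = base + 2 * 2              # non-persistent, parallel (8 objects in 2 rounds): 6 RTT0
c_pipelined = base + 1        # persistent with pipelining: 3 RTT0
c_unpipelined = base + 8      # persistent without pipelining: 10 RTT0
print(a, b, c_pipelined, c_unpipelined)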
Problem 9
a) The time to transmit an object of size L over a link of rate R is L/R. The average time is the average size of the object divided by R:
  Δ = (850,000 bits) / (15,000,000 bits/sec) = 0.0567 sec.
The traffic intensity on the link is βΔ = (16 requests/sec)(0.0567 sec/request) = 0.907. Thus, the average access delay is (0.0567 sec)/(1 - 0.907) ≈ 0.6 seconds. The total average response time is therefore 0.6 sec + 3 sec = 3.6 sec.
b) The traffic intensity on the access link is reduced by 60% since 60% of the requests are satisfied within the institutional network. Thus the average access delay is (0.0567 sec)/[1 - (0.4)(0.907)] = 0.089 seconds. The response time is approximately zero if the request is satisfied by the cache (which happens with probability 0.6); the average response time is 0.089 sec + 3 sec = 3.089 sec for cache misses (which happens 40% of the time). So the average response time is (0.6)(0 sec) + (0.4)(3.089 sec) = 1.24 seconds. Thus the average response time is reduced from 3.6 sec to 1.24 sec.
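The same calculation as a short script, following the queuing formula Δ/(1 - βΔ) used in the answer:

# Average response time with and without the institutional web cache.
delta = 850_000 / 15_000_000      # average transmission time per object, sec
beta = 16                         # request rate, requests/sec
internet_delay = 3.0              # delay from the access link to the origin and back, sec
no_cache = delta / (1 - beta * delta) + internet_delay
hit_rate = 0.6
miss_time = delta / (1 - (1 - hit_rate) * beta * delta) + internet_delay
with_cache = hit_rate * 0 + (1 - hit_rate) * miss_time
print("%.2f sec vs %.2f sec" % (no_cache, with_cache))   # ~3.6 sec vs ~1.24 sec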
Problem 10
Note
that
each
downloaded
object
can
be
completely
put
into
one
data
packet.
Let
Tp
denote the one-way
propagation delay between the client and the
server.
First
consider
parallel
downloads
using
non-persistent
connections.
Parallel
downloads
would
allow
10
connections
to
share
the
150
bits/sec
bandwidth,
giving
each
just
15
bits/sec. Thus, the total
time needed to receive all objects is given by:
(200/150+
T
p + 200/150
+
T
p +
200/150+
T
p + 100,000/150+
T
p )
+
(200/(150/10)+
T
p +
200/(150/10) +
T
p +
200/(150/10)+
T
p +
100,000/(150/10)+
T
p )
= 7377 + 8*
T
p
(seconds)
Now consider a
persistent HTTP connection. The total time needed
is given by:
(200/150+
T
p +
200/150 +
T
p +
200/150+
T
p + 100,000/150+
T
p )
+
10*(200/150+
T
p +
100,000/150+
T
p )
=7351 + 24*
T
p
(seconds)
Assuming the
speed of light is 300*10
6
m/sec, then
Tp=10/(300*10
6
)=0.03
microsec. Tp
is therefore negligible
compared with transmission delay.
Thus, we see that persistent HTTP is
not significantly faster (less than 1 percent)
than the
non-persistent case with
parallel download.
Problem 11
a) Yes, because Bob has more connections, he can get a larger share of the link bandwidth.
b) Yes, Bob still needs to perform parallel downloads; otherwise he will get less bandwidth than the other four users.
Problem 12

from socket import *

serverPort = 12000
serverSocket = socket(AF_INET, SOCK_STREAM)   # create a TCP welcoming socket
serverSocket.bind(('', serverPort))           # bind it to the server port
serverSocket.listen(1)                        # listen for one incoming connection
connectionSocket, addr = serverSocket.accept()
while 1:
    sentence = connectionSocket.recv(1024)    # read up to 1024 bytes from the client
    print 'From Server:', sentence, '\n'
Problem 13
The MAIL FROM: in SMTP is a message from the SMTP client that identifies the sender of the mail message to the SMTP server. The From: on the mail message itself is NOT an SMTP message, but rather is just a line in the body of the mail message.
Problem 14
SMTP uses a line containing only a period to mark the end of a message body. HTTP uses the "Content-Length" header field to indicate the length of a message body. No, HTTP cannot use the method used by SMTP, because an HTTP message could be binary data, whereas in SMTP, the message body must be in 7-bit ASCII format.
Problem 15
MTA stands for Mail Transfer Agent. A host sends the message to an MTA. The message then follows a sequence of MTAs to reach the receiver's mail reader. We see that this spam message follows a chain of MTAs. An honest MTA should report where it receives the message. Notice that in this message, "asusus-4b96 ([58.88.21.177])" does not report from where it received the email. Since we assume only the originator is dishonest, "asusus-4b96 ([58.88.21.177])" must be the originator.
Problem 16
UIDL abbreviates
“unique
-
ID listing”. When a
POP3 client
issues the UIDL command,
the
server
responds
with
the
unique
message
ID
for
all
of
the
messages
present
in
the
user'
s mailbox. This command
is useful for “download and keep”. By
maintaining a file
that
lists
the
messages
retrieved
during
earlier
sessions,
the
client
can
use
the
UIDL
command
to determine which messages on the server have
already been seen.
Problem 17
a) C: dele 1
   C: retr 2
   S: (blah blah ...
   S: .........blah)
   S: .
   C: dele 2
   C: quit
   S: +OK POP3 server signing off
b) C: retr 2
   S: blah blah ...
   S: .........blah
   S: .
   C: quit
   S: +OK POP3 server signing off
c) C: list
   S: 1 498
   S: 2 912
   S: .
   C: retr 1
   S: blah .....
   S: ....blah
   S: .
   C: retr 2
   S: blah blah ...
   S: .........blah
   S: .
   C: quit
   S: +OK POP3 server signing off
Problem 18
a) For a given input of domain name (such as ), IP address or network administrator name, the whois database can be used to locate the corresponding registrar, whois server, DNS server, and so on.
b) from ; from
c) Local Domain:
Web servers: 207.69.189.21, 207.69.189.22, 207.69.189.23, 207.69.189.24, 207.69.189.25, 207.69.189.26, 207.69.189.27, 207.69.189.28
Mail servers: (207.69.189.217), (207.69.189.218), (207.69.189.219), (207.69.189.220)
Name servers: (207.69.188.196), (207.69.188.197)

Web servers: (216.109.112.135, 66.94.234.13)
Mail servers: (209.191.118.103), (66.196.97.250), (68.142.237.182, 216.39.53.3), (216.39.53.2), (216.39.53.1), (209.191.88.247, 68.142.202.247), (209.191.88.239, 206.190.53.191)
Name servers: (66.218.71.63), (68.142.255.16), (217.12.4.104), (68.142.196.63), (216.109.116.17), (202.165.104.22), (202.160.176.146)

Web servers: (64.4.33.7, 64.4.32.7)
Mail servers: (65.54.245.8, 65.54.244.8, 65.54.244.136), (65.54.244.40, 65.54.244.168, 65.54.245.40), (65.54.244.72, 65.54.244.200, 65.54.245.72), (65.54.244.232, 65.54.245.104, 65.54.244.104)
Name servers: (207.68.160.190), (65.54.240.126), (213.199.161.77), (207.46.66.126), (65.55.238.126)
d) The Yahoo web server has multiple IP addresses (216.109.112.135, 66.94.234.13).
e) The address range for Polytechnic University: 128.238.0.0 - 128.238.255.255.
f) An attacker can use the whois database and the nslookup tool to determine the IP address ranges, DNS server addresses, etc., for the target institution.
g) By analyzing the source address of attack packets, the victim can use whois to obtain information about the domain from which the attack is coming and possibly inform the administrators of the origin domain.
Problem 19
a) The following delegation chain is used for (authoritative).
First command:
  dig +norecurse @ any
;; AUTHORITY SECTION:
edu.  172800  IN  NS  .
edu.  172800  IN  NS  .
edu.  172800  IN  NS  .
edu.  172800  IN  NS  .
edu.  172800  IN  NS  .
edu.  172800  IN  NS  .
edu.  172800  IN  NS  .
edu.  172800  IN  NS  .
Among all returned edu DNS servers, we send a query to the first one:
  dig +norecurse @ any
.  172800  IN  NS  .
.  172800  IN  NS  .
.  172800  IN  NS  .
Among all three returned authoritative DNS servers, we send a query to the first one:
  dig +norecurse @ any
.  21600  IN  A  128.119.245.12
b) The answer for could be: (authoritative)
Problem 20
We can periodically take a
snapshot of the DNS caches in
the
local
DNS
servers. The
Web server that
appears most frequently in the DNS caches is the
most popular server.
This is because if
more users are interested in a Web server, then
DNS requests for that
server are more
frequently sent by users. Thus, that Web server
will appear in the DNS
caches more
frequently.
For a complete
measurement study, see:
Craig E. Wills,
Mikhail Mikhailov, Hao Shang
“
Inferring
Relative
Popularity
of
Internet
Applications
by
Actively
Querying
DNS
Caches
”, in
IMC'03, October 27-
29, 2003,
Miami Beach, Florida, USA
Problem 21
Yes, we can use dig to
query that Web site in the local DNS server.
For example, “dig ” will return the
query time f
or finding . If
was just accessed a couple of seconds
ago, an entry for is cached in the local
DNS cache, so the query time is 0 msec.
Otherwise, the query time is large.
Problem 22
For calculating the minimum distribution time for client-server distribution, we use the following formula:
  Dcs = max { N*F/us, F/dmin }
Similarly, for calculating the minimum distribution time for P2P distribution, we use the following formula:
  DP2P = max { F/us, F/dmin, N*F/(us + sum_{i=1}^{N} ui) }
where F = 15 Gbits = 15 * 1024 Mbits, us = 30 Mbps, and dmin = di = 2 Mbps. Note, 300 Kbps = 300/1024 Mbps.

Client-server minimum distribution time (seconds):
                 N = 10    N = 100    N = 1000
  u = 300 Kbps    7680      51200      512000
  u = 700 Kbps    7680      51200      512000
  u = 2 Mbps      7680      51200      512000

Peer-to-peer minimum distribution time (seconds):
                 N = 10    N = 100    N = 1000
  u = 300 Kbps    7680      25904      47559
  u = 700 Kbps    7680      15616      21525
  u = 2 Mbps      7680       7680       7680
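A small script that reproduces both tables from the two formulas above (units are Mbits and Mbps, so the results are in seconds):

# Minimum distribution time for client-server and P2P distribution.
F = 15 * 1024          # file size, Mbits
u_s = 30               # server upload rate, Mbps
d_min = 2              # slowest client download rate, Mbps
for u in (300 / 1024, 700 / 1024, 2):            # peer upload rates, Mbps
    for N in (10, 100, 1000):
        d_cs  = max(N * F / u_s, F / d_min)
        d_p2p = max(F / u_s, F / d_min, N * F / (u_s + N * u))
        print(N, round(u, 3), round(d_cs), round(d_p2p))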
Problem 23
a) Consider a distribution scheme in which the server sends the file to each client, in parallel, at a rate of us/N. Note that this rate is less than each of the clients' download rates, since by assumption us/N ≤ dmin. Thus each client can also receive at rate us/N. Since each client receives at rate us/N, the time for each client to receive the entire file is F/(us/N) = NF/us. Since all the clients receive the file in NF/us, the overall distribution time is also NF/us.
b) Consider a distribution scheme in which the server sends the file to each client, in parallel, at a rate of dmin. Note that the aggregate rate, N*dmin, is less than the server's link rate us, since by assumption us/N ≥ dmin. Since each client receives at rate dmin, the time for each client to receive the entire file is F/dmin. Since all the clients receive the file in this time, the overall distribution time is also F/dmin.
c) From Section 2.6 we know that
  DCS ≥ max { NF/us, F/dmin }   (Equation 1)
Suppose that us/N ≤ dmin. Then from Equation 1 we have DCS ≥ NF/us. But from (a) we have DCS ≤ NF/us. Combining these two gives:
  DCS = NF/us when us/N ≤ dmin.   (Equation 2)
We can similarly show that:
  DCS = F/dmin when us/N ≥ dmin.   (Equation 3)
Combining Equation 2 and Equation 3 gives the desired result.
Problem 24
a) Define u = u1 + u2 + ... + uN. By assumption,
  us <= (us + u)/N   (Equation 1)
Divide the file into N parts, with the ith part having size (ui/u)F. The server transmits the ith part to peer i at rate ri = (ui/u)us. Note that r1 + r2 + ... + rN = us, so that the aggregate server rate does not exceed the link rate of the server. Also have each peer i forward the bits it receives to each of the other N-1 peers at rate ri. The aggregate forwarding rate by peer i is (N-1)ri. We have
  (N-1)ri = (N-1)(us*ui)/u <= ui,
where the last inequality follows from Equation 1. Thus the aggregate forwarding rate of peer i is less than its link rate ui.
In this distribution scheme, peer i receives bits at an aggregate rate of
  ri + sum_{j != i} rj = us.
Thus each peer receives the file in F/us.
b) Again define u = u1 + u2 + ... + uN. By assumption,
  us >= (us + u)/N   (Equation 2)
Let ri = ui/(N-1) and rN+1 = (us - u/(N-1))/N.
In this distribution scheme, the file is broken into N+1 parts. The server sends bits from the ith part to the ith peer (i = 1, ..., N) at rate ri. Each peer i forwards the bits arriving at rate ri to each of the other N-1 peers. Additionally, the server sends bits from the (N+1)st part at rate rN+1 to each of the N peers. The peers do not forward the bits from the (N+1)st part.
The aggregate send rate of the server is
  r1 + ... + rN + N*rN+1 = u/(N-1) + us - u/(N-1) = us.
Thus, the server's send rate does not exceed its link rate. The aggregate send rate of peer i is
  (N-1)ri = ui.
Thus, each peer's send rate does not exceed its link rate.
In this distribution scheme, peer i receives bits at an aggregate rate of
  ri + rN+1 + sum_{j != i} rj = u/(N-1) + (us - u/(N-1))/N = (us + u)/N.
Thus each peer receives the file in NF/(us + u).
(For simplicity, we neglected to specify the size of the file part for i = 1, ..., N+1. We now provide that here. Let Δ = NF/(us + u) be the distribution time. For i = 1, ..., N, the ith file part is Fi = ri*Δ bits. The (N+1)st file part is FN+1 = rN+1*Δ bits. It is straightforward to show that F1 + ... + FN+1 = F.)
c) The solution to this part is similar to that of Problem 23(c). We know from Section 2.6 that
  DP2P >= max { F/us, NF/(us + u) }.
Combining this with (a) and (b) gives the desired result.
Problem 25
There are N nodes in the overlay network. There are N(N-1)/2 edges.
Problem 26
Yes.
His first claim is possible, as long as there are
enough peers staying in the swarm for
a
long enough time. Bob can always receive data
through optimistic unchoking by other
peers.
His
second claim is also true. He can run a client on
each host, let each client
“free
-
ride,”
and combine the collected chunks from
the different hosts into a single file. He can
even
write a small scheduling program
to make the different hosts ask for different
chunks of
the file. This is actually a
kind of Sybil attack in P2P networks.
Problem 27
a) N files, under the assumption that we do a one-to-one matching by pairing video versions with audio versions in a decreasing order of quality and rate.
b) 2N files.
Problem 28
a)
If you run
TCPClient first, then the client will attempt to
make a TCP connection with
a non-
existent server process. A TCP connection will not
be made.
b)
UDPClient doesn't establish a TCP connection with the server. Thus, everything should work fine if you first run UDPClient, then run UDPServer, and then type some input into the keyboard.
c)
If you use different port numbers, then the client will attempt to establish a TCP connection with the wrong process or a non-existent process. Errors will occur.
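A minimal sketch (not taken from the textbook's TCPClient/UDPClient programs) illustrating parts a) and b): a TCP connect() to a port with no listening server fails immediately, while a UDP sendto() succeeds because no connection is established. The host name and port number below are placeholders.

    # Rough illustration of parts a) and b); hostname/port are made up.
    from socket import socket, AF_INET, SOCK_STREAM, SOCK_DGRAM

    serverName, serverPort = 'localhost', 12000   # assume no server is running yet

    # a) TCP: the three-way handshake needs a listening server, so this fails.
    try:
        tcpSocket = socket(AF_INET, SOCK_STREAM)
        tcpSocket.connect((serverName, serverPort))
    except OSError as e:
        print('TCP connection could not be made:', e)

    # b) UDP: no handshake, so the datagram is sent even with no server running
    # (it is simply lost); the client would only block later waiting for a reply.
    udpSocket = socket(AF_INET, SOCK_DGRAM)
    udpSocket.sendto('hello'.encode(), (serverName, serverPort))
    print('UDP datagram sent without any connection setup')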
Problem 29
In the original program, UDPClient does
not specify a port number when it creates the
socket. In this case, the code lets the
underlying operating system choose a port number.
With the additional line, when UDPClient is executed, a UDP socket is created with port number 5432.

UDPServer needs to know the client port number so that it can send packets back to the correct client socket. Glancing at UDPServer, we see that the client port number is not “hard-wired” into the server code; instead, UDPServer determines the client port number by unraveling the datagram it receives from the client. Thus UDPServer will work with any client port number, including 5432. UDPServer therefore does not need to be modified.

Before: Client socket = x (chosen by OS); Server socket = 9876
After:  Client socket = 5432
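For reference, a hedged sketch of what the “additional line” in UDPClient would look like, and of how UDPServer already learns the client's port from recvfrom(). The variable names follow the usual UDPClient/UDPServer style but are assumptions here; the two halves represent two separate programs.

    # --- UDPClient side (sketch, run as its own program) ---
    from socket import socket, AF_INET, SOCK_DGRAM

    clientSocket = socket(AF_INET, SOCK_DGRAM)
    clientSocket.bind(('', 5432))          # the additional line: fix the client port
    clientSocket.sendto('hello'.encode(), ('localhost', 9876))

    # --- UDPServer side (sketch, run as its own program) ---
    serverSocket = socket(AF_INET, SOCK_DGRAM)
    serverSocket.bind(('', 9876))
    # recvfrom() returns the client's (IP, port) pair, so the server works with
    # any client port number, including 5432.
    message, clientAddress = serverSocket.recvfrom(2048)
    serverSocket.sendto(message.upper(), clientAddress)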
Problem 30
Yes, you can configure many browsers to open multiple simultaneous connections to a Web site. The advantage is that you will potentially download the file faster. The disadvantage is that you may be hogging the bandwidth, thereby significantly slowing down the downloads of other users who are sharing the same physical links.
Problem 31
For an application such as remote login (telnet and ssh), a byte-stream oriented protocol is very natural since there is no notion of message boundaries in the application. When a user types a character, we simply drop the character into the TCP connection.

In other applications, we may be sending a series of messages that have inherent boundaries between them, for example, when one SMTP mail server sends another SMTP mail server several email messages back to back. Since TCP does not have a mechanism to indicate the boundaries, the application must add the indications itself, so that the receiving side of the application can distinguish one message from the next. If each message were instead put into a distinct UDP segment, the receiving end would be able to distinguish the various messages without any indications added by the sending side of the application.
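To make the “indications” concrete, here is a minimal sketch (my own illustration, not from the text) of one common convention: prefixing each application message with a fixed-size length field so the receiver can recover message boundaries from the TCP byte stream.

    # Sketch of length-prefix framing over a TCP byte stream (illustrative only).
    import struct

    def send_message(sock, payload: bytes):
        # Prepend a 4-byte big-endian length so the receiver knows where
        # this message ends and the next one begins.
        sock.sendall(struct.pack('!I', len(payload)) + payload)

    def recv_exact(sock, n: int) -> bytes:
        data = b''
        while len(data) < n:
            chunk = sock.recv(n - len(data))
            if not chunk:
                raise ConnectionError('connection closed mid-message')
            data += chunk
        return data

    def recv_message(sock) -> bytes:
        (length,) = struct.unpack('!I', recv_exact(sock, 4))
        return recv_exact(sock, length)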
Problem 32
To create a web server, we need to run
web server software on a host. Many vendors sell
web server software. However, the most
popular web server software today is Apache,
which is open source and free. Over the
years it has been highly optimized by the
open-
source community.
Chapter 3 Review Questions
1.
a)
Call this
protocol Simple Transport Protocol (STP). At the
sender side, STP accepts
from the
sending process a chunk of data not exceeding 1196
bytes, a destination host
address, and
a destination port number. STP adds a four-byte
header to each chunk
and puts the port
number of the destination process in this header.
STP then gives the
destination host
address and the resulting segment to the network
layer. The network
layer delivers the
segment to STP at the destination host. STP then
examines the port
number in the
segment, extracts the data from the segment, and
passes the data to the
process
identified by the port number.
b)
The segment now has two header fields: a source port field and a destination port field. At the sender side, STP accepts a chunk of data not exceeding 1192 bytes, a destination host address, a source port number, and a destination port number. STP creates a segment which contains the application data, source port number, and destination port number. It then gives the segment and the destination host address to the network layer. After receiving the segment, STP at the receiving host gives the application process the application data and the source port number.
c)
No, the
transport layer does not have to do anything in
the core; the transport
layer
“lives” in the end systems.
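A small sketch (my own, under the assumptions in parts a) and b) above) of how STP might pack its four-byte header: two bytes for the source port and two bytes for the destination port in front of at most 1192 bytes of data.

    # Hypothetical STP segment layout from parts a)/b): 4-byte header = two 16-bit
    # port numbers, followed by the application data (illustrative sketch only).
    import struct

    MAX_DATA = 1192   # the 1196-byte chunk minus the 4-byte header

    def make_segment(src_port: int, dst_port: int, data: bytes) -> bytes:
        assert len(data) <= MAX_DATA
        return struct.pack('!HH', src_port, dst_port) + data

    def parse_segment(segment: bytes):
        src_port, dst_port = struct.unpack('!HH', segment[:4])
        return src_port, dst_port, segment[4:]

    seg = make_segment(5432, 80, b'hello')
    print(parse_segment(seg))   # (5432, 80, b'hello')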
2.
1.
For sending a letter, the family member is required to give the delegate the letter itself, the address of the destination house, and the name of the recipient. The delegate clearly writes the recipient’s name on the top of the letter. The delegate then puts the letter in an envelope and writes the address of the destination house on the envelope. The delegate then gives the letter to the planet’s mail service. At the receiving side, the delegate receives the letter from the mail service, takes the letter out of the envelope, and takes note of the recipient name written at the top of the letter. The delegate then gives the letter to the family member with this name.
2.
No, the mail
service does not have to open the envelope; it
only examines the address
on the
envelope.
3.
Source port number y and destination
port number x.
4.
An application developer may not want its application to use TCP’s congestion control, which can throttle the application’s sending rate at times of congestion. Often, designers of IP telephony and IP videoconference applications choose to run their applications over UDP because they want to avoid TCP’s congestion control. Also, some applications do not need the reliable data transfer provided by TCP.
5.
Since most firewalls are configured to block UDP traffic, using TCP for video and voice traffic lets the traffic through the firewalls.
6.
Yes. The application developer can put
reliable data transfer into the application layer
protocol. This would require a
significant amount of work and debugging, however.
7.
Yes, both segments will be directed to the same socket. For each received segment, at the socket interface, the operating system will provide the process with the IP addresses to determine the origins of the individual segments.
8.
For each persistent connection, the Web server creates a separate “connection socket”. Each connection socket is identified with a four-tuple: (source IP address, source port number, destination IP address, destination port number). When host C receives an IP datagram, it examines these four fields in the datagram/segment to determine to which socket it should pass the payload of the TCP segment. Thus, the requests from A and B pass through different sockets. The identifier for both of these sockets has 80 for the destination port; however, the identifiers for these sockets have different values for source IP addresses. Unlike UDP, when the transport layer passes a TCP segment’s payload to the application process, it does not specify the source IP address, as this is implicitly specified by the socket identifier.
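A hedged sketch (illustrative, not from the text) of the server side described above: each call to accept() returns a new connection socket, and the operating system uses the four-tuple to steer each arriving segment's payload to the right one. Port 80 comes from the review question; everything else is an assumption.

    # Sketch: one welcoming socket, one connection socket per client (A, B, ...).
    from socket import socket, AF_INET, SOCK_STREAM

    serverSocket = socket(AF_INET, SOCK_STREAM)
    serverSocket.bind(('', 80))      # binding to port 80 may require privileges
    serverSocket.listen(5)

    while True:
        # accept() returns a NEW socket for this connection; the OS demultiplexes
        # later segments to it using (source IP, source port, dest IP, dest port).
        connectionSocket, clientAddress = serverSocket.accept()
        request = connectionSocket.recv(2048)
        print('request from', clientAddress)   # A and B show different source IPs
        connectionSocket.close()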
9.
Sequence numbers are required for a
receiver to find out whether an arriving packet
contains new data or is a
retransmission.
10.
To handle
losses in the channel. If the ACK for a
transmitted packet is not received
within the duration of the timer for
the packet, the packet (or its ACK or NACK) is
assumed to have been lost. Hence, the
packet is retransmitted.
11.
A timer would still be necessary in the protocol rdt 3.0. If the round-trip time is known, then the only advantage is that the sender knows for sure that either the packet or the ACK (or NACK) for the packet has been lost, whereas in the real scenario the ACK (or NACK) might still be on its way to the sender when the timer expires. However, to detect the loss, a timer of constant duration will still be necessary for each packet at the sender.
12.
a)
The packet
loss caused a time out after which all the five
packets were retransmitted.
b)
Loss of an ACK didn’t trigger any retransmission, as Go-Back-N uses cumulative acknowledgements.
c)
The sender was unable to send the sixth packet, as the send window size is fixed to 5.
13.
a)
When the packet was lost, the four received packets were buffered by the receiver. After the timeout, the sender retransmitted the lost packet and the receiver delivered the buffered packets to the application in the correct order.
b)
A duplicate ACK was sent by the receiver for the lost ACK.
c)
The sender was unable to send the sixth packet, as the send window size is fixed to 5.

When a packet was lost, Go-Back-N retransmitted all the packets whereas Selective Repeat retransmitted the lost packet only. In the case of a lost acknowledgement, Selective Repeat sent a duplicate ACK, while Go-Back-N's use of cumulative acknowledgements made that duplicate ACK unnecessary.
14.
a) false b)
false c) true d) false e) true f) false
g) false
15.
a) 20 bytes b) ack number = 90
16.
3
segments. First segment: seq = 43, ack = 80; Second segment: seq = 80, ack = 44; Third segment: seq = 44, ack = 81.
17.
R/2
18.
False, it is set to half of the current
value of the congestion window.
19.
Let X = RTT_FE, Y = RTT_BE, and ST = Search time. Consider the following timing diagram.

[Timing diagram: TCP packet exchange between a client and a server (Back End) with a proxy (Front End) between them.]

From this diagram we see that the total time is 4X + Y + ST = 4*RTT_FE + RTT_BE + Search time.
Chapter 3 Problems
Problem 1
a) A → S
b) B → S
c) S → A
d) S → B
e) Yes.
f) No.

                                 a)     b)     c)     d)
    source port numbers         467    513     23     23
    destination port numbers     23     23    467    513
Problem 2
Suppose the IP addresses of the hosts
A, B, and C are a, b, c, respectively. (Note that
a, b,
c are distinct.)
To host A: source port = 80, source IP address = b, dest port = 26145, dest IP address = a
To host C, left process: source port = 80, source IP address = b, dest port = 7532, dest IP address = c
To host C, right process: source port = 80, source IP address = b, dest port = 26145, dest IP address = c
Problem 3
Note, wrap around if overflow.

        0 1 0 1 0 0 1 1
      + 0 1 1 0 0 1 1 0
      -----------------
        1 0 1 1 1 0 0 1

        1 0 1 1 1 0 0 1
      + 0 1 1 1 0 1 0 0
      -----------------
        0 0 1 0 1 1 1 0   (with wraparound of the overflow bit)

One's complement = 1 1 0 1 0 0 0 1.
To detect errors, the receiver adds the four words (the three original words and the checksum). If the sum contains a zero, the receiver knows there has been an error. All one-bit errors will be detected, but two-bit errors can be undetected (e.g., if the last digit of the first word is converted to a 0 and the last digit of the second word is converted to a 1).
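The wraparound addition is easy to verify mechanically. A small Python sketch (my own check, using the three 8-bit words that appear in the computation above) reproduces the checksum 11010001:

    # One's-complement sum of the three 8-bit words from the computation above.
    def ones_complement_sum_8bit(words):
        total = 0
        for w in words:
            total += w
            if total > 0xFF:               # wrap the overflow (carry) back in
                total = (total & 0xFF) + 1
        return total

    words = [0b01010011, 0b01100110, 0b01110100]
    s = ones_complement_sum_8bit(words)
    checksum = ~s & 0xFF                   # one's complement of the sum
    print(format(s, '08b'))                # 00101110
    print(format(checksum, '08b'))         # 11010001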
Problem 4
a)
Adding the two bytes gives 11000001. Taking the
one’s complement gives 00111110.
b) Adding the two bytes
gives 01000000; the one’s complement gives
10111111.
c)
First byte = 01010100; second byte = 01101101.
Problem 5
No, the receiver cannot be absolutely certain that no bit errors have occurred. This is because of the manner in which the checksum for the packet is calculated. If the corresponding bits (that would be added together) of two 16-bit words in the packet were 0 and 1, then even if these get flipped to 1 and 0 respectively, the sum still remains the same. Hence, the 1s complement the receiver calculates will also be the same. This means the checksum will verify even though there was a transmission error.
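A tiny Python demonstration of this (with made-up 16-bit words): flipping corresponding bits 0 -> 1 and 1 -> 0 in two different words leaves the one's-complement sum, and hence the checksum, unchanged.

    # Two 16-bit words; flip the same bit position in opposite directions.
    def ones_complement_sum_16bit(words):
        total = 0
        for w in words:
            total += w
            if total > 0xFFFF:
                total = (total & 0xFFFF) + 1
        return total

    original  = [0b0000000000000001, 0b0000000000000000]   # bit 0: 1 in word A, 0 in word B
    corrupted = [0b0000000000000000, 0b0000000000000001]   # both bits flipped in transit

    print(ones_complement_sum_16bit(original) ==
          ones_complement_sum_16bit(corrupted))            # True: error goes undetected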
Problem 6
Suppose the sender is in state “Wait for call 1 from above” and the receiver (the receiver shown in the homework problem) is in state “Wait for 1 from below.” The sender sends a packet with sequence number 1, and transitions to “Wait for ACK or NAK 1,” waiting for an ACK or NAK. Suppose now the receiver receives the packet with sequence number 1 correctly, sends an ACK, and transitions to state “Wait for 0 from below,” waiting for a data packet with sequence number 0. However, the ACK is corrupted. When the rdt2.1 sender gets the corrupted ACK, it resends the packet with sequence number 1. However, the receiver is waiting for a packet with sequence number 0 and (as shown in the homework problem) always sends a NAK when it doesn't get a packet with sequence number 0. Hence the sender will always be sending a packet with sequence number 1, and the receiver will always be NAKing that packet. Neither will progress forward from that state.
Problem 7
To best answer this question, consider why we needed sequence numbers in the first place. We saw that the sender needs sequence numbers so that the receiver can tell if a data packet is a duplicate of an already received data packet. In the case of ACKs, the sender does not need this info (i.e., a sequence number on an ACK) to detect a duplicate ACK. A duplicate ACK is obvious to the rdt3.0 sender, since when it received the original ACK it transitioned to the next state. The duplicate ACK is not the ACK that the sender needs and hence is ignored by the rdt3.0 sender.
Problem 8
The sender side of protocol rdt3.0 differs from the sender side of protocol rdt2.2 in that timeouts have been added. We have seen that the introduction of timeouts adds the possibility of duplicate packets into the sender-to-receiver data stream. However, the receiver in protocol rdt2.2 can already handle duplicate packets. (Receiver-side duplicates in rdt2.2 would arise if the receiver sent an ACK that was lost, and the sender then retransmitted the old data.) Hence the receiver in protocol rdt2.2 will also work as the receiver in protocol rdt3.0.
Problem 9
Suppose
the protocol has been in operation for some time.
The sender is in state “Wait
for call
from above” (top
left hand
corner) and the receiver is in state “Wait for 0
from
below”. The scenarios for
corrupted data and corrupted ACK are shown in
Figure 1.
[Figure 1: rdt 3.0 scenarios: corrupted data, corrupted ACK.
Corrupted data: the sender sends M0; M0 is corrupted; the packet is garbled, so the receiver resends its last ACK (A1); the sender ignores A1; on timeout the sender resends M0; the exchange then proceeds with M0/A0 and M1/A1.
Corrupted ACK: the sender sends M0 and receives A0; the sender sends M1; A1 is corrupted; the sender ignores the garbled ACK; on timeout the sender resends M1; the exchange then proceeds with A1 and M0.]
Problem 10
Here, we add a timer, whose value is greater than the known round-trip propagation delay. We add a timeout event to the “Wait for ACK or NAK0” and “Wait for ACK or NAK1” states. If the timeout event occurs, the most recently transmitted packet is retransmitted. Let us see why this protocol will still work with the rdt2.1 receiver.

- Suppose the timeout is caused by a lost data packet, i.e., a packet on the sender-to-receiver channel. In this case, the receiver never received the previous transmission and, from the receiver's viewpoint, if the timeout retransmission is received, it looks exactly the same as if the original transmission were being received.

- Suppose now that an ACK is lost. The sender will eventually retransmit the packet on a timeout. But a retransmission is exactly the same action that is taken if an ACK is garbled. Thus the sender's reaction is the same with a lost ACK as with a garbled ACK. The rdt2.1 receiver can already handle the case of a garbled ACK.
Problem 11
If the sending of this message were removed, the sending and receiving sides would deadlock, waiting for an event that would never occur. Here’s a scenario:

- The sender sends pkt0, enters the “Wait for ACK0” state, and waits for a packet back from the receiver.

- The receiver is in the “Wait for 0 from below” state, and receives a corrupted packet from the sender. Suppose it does not send anything back, and simply re-enters the “Wait for 0 from below” state.

Now, the sender is awaiting an ACK of some sort from the receiver, and the receiver is waiting for a data packet from the sender – a deadlock!
Problem 12
The protocol would still work, since a retransmission would be what would happen if the packet received with errors had actually been lost (and from the receiver's standpoint, it never knows which of these events, if either, will occur).

To get at the more subtle issue behind this question, one has to allow for premature timeouts to occur. In this case, if each extra copy of the packet is ACKed and each received extra ACK causes another extra copy of the current packet to be sent, the number of times packet n is sent will increase without bound as n approaches infinity.
Problem 13
[Timing diagram: M0, A0, M1, A1, M0 ... M0, A0, M1, A1 – old version of M0 accepted!]
Problem 14
In a NAK-only protocol, the loss of packet x is only detected by the receiver when packet x+1 is received. That is, the receiver receives x-1 and then x+1; only when x+1 is received does the receiver realize that x was missed. If there is a long delay between the transmission of x and the transmission of x+1, then it will be a long time until x can be recovered, under a NAK-only protocol.

On the other hand, if data is being sent often, then recovery under a NAK-only scheme could happen quickly. Moreover, if errors are infrequent, then NAKs are only occasionally sent (when needed), and ACKs are never sent – a significant reduction in feedback in the NAK-only case over the ACK-only case.
Problem 15

It takes 12 microseconds (or 0.012 milliseconds) to send a packet, as 1500*8/10^9 = 12 microseconds. In order for the sender to be busy 98 percent of the time, we must have

    util = 0.98 = (0.012 n) / 30.012

or n ≈ 2451 packets.
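A quick check of this arithmetic (my own, using the 30 ms RTT implied by the 30.012 ms denominator):

    # Solve 0.98 = (0.012 * n) / 30.012 for n (times in milliseconds).
    t_trans = 1500 * 8 / 1e9 * 1e3      # transmission time per packet: 0.012 ms
    n = 0.98 * (30 + t_trans) / t_trans
    print(round(n))                      # 2451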
Problem 16
Yes. This actually causes the sender to send a number of pipelined data packets into the channel.

Yes. Here is one potential problem. If data segments are lost in the channel, then the sender of rdt 3.0 won't re-send those segments, unless there are some additional mechanisms in the application to recover from loss.
Problem 17
[FSM for entity A (two states):
    State "Wait: send to B":
        rdt_send(data): packet = make_pkt(data); udt_send(packet); move to "Wait: receive from B"
    State "Wait: receive from B":
        rdt_send(data): rdt_unable_to_send(data)
        rdt_receive(packet): extract(packet, data); deliver_data(data); move to "Wait: send to B"

FSM for entity B (two states):
    State "Wait: send to A":
        rdt_send(data): packet = make_pkt(data); udt_send(packet); move to "Wait: receive from A"
    State "Wait: receive from A":
        rdt_send(data): rdt_unable_to_send(data)
        rdt_receive(packet): extract(packet, data); deliver_data(data); move to "Wait: send to A"]
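Purely as an illustration (this is my own sketch, not the textbook's solution), the two-state FSMs above can be mirrored in a few lines of Python; the class and parameter names are invented.

    # Hypothetical sketch of one entity (A or B) alternating between sending and
    # receiving over a perfect channel, mirroring the two-state FSMs above.
    class AlternatingEntity:
        def __init__(self, udt_send, deliver_data, sends_first):
            self.udt_send = udt_send          # puts a packet on the channel
            self.deliver_data = deliver_data  # hands data to the layer above
            self.my_turn_to_send = sends_first

        def rdt_send(self, data):
            if not self.my_turn_to_send:
                return 'rdt_unable_to_send'   # not this entity's turn
            packet = data                     # make_pkt: perfect channel, no header needed
            self.udt_send(packet)
            self.my_turn_to_send = False      # now wait to receive
            return 'sent'

        def rdt_receive(self, packet):
            self.deliver_data(packet)         # extract + deliver_data
            self.my_turn_to_send = True       # now it is this entity's turn to send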
Problem 18
In our solution, the sender will wait until it receives an ACK for a pair of messages (seqnum and seqnum+1) before moving on to the next pair of messages. Data packets have a data field and carry a two-bit sequence number. That is, the valid sequence numbers are 0, 1, 2, and 3. (Note: you should think about why a 1-bit sequence number space of 0, 1 only would not work in the solution below.) ACK messages carry the sequence number of the data packet they are acknowledging.

The FSMs for the sender and receiver are shown in Figure 2. Note that the sender state records whether (i) no ACKs have been received for the current pair, (ii) an ACK for seqnum (only) has been received, or (iii) an ACK for seqnum+1 (only) has been received. In this figure, we assume that the seqnum is initially 0, and that the sender has sent the first two data messages (to get things going). A timeline trace for the sender and receiver recovering from a lost packet is shown below:
Figure 2: Sender and receiver for
Problem (3.18)
Sender
make pair
(0,1)
send packet 0
Receiver